Test Report: Docker_Linux_containerd_arm64 22089

334c0a8a01ce6327cc86bd51efb70eb94afee1a0:2025-12-10:42712

Failed tests (34/417)

Order  Failed test  Duration (s)
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 501.72
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 368.77
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.21
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.44
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.49
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 735.2
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.27
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.67
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.13
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.48
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.64
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 1.41
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.14
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 125.2
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.05
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.26
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.25
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.25
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.24
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.26
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 1.63
358 TestKubernetesUpgrade 804.4
431 TestStartStop/group/no-preload/serial/FirstStart 512.19
437 TestStartStop/group/newest-cni/serial/FirstStart 501.9
438 TestStartStop/group/no-preload/serial/DeployApp 3
439 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 97.95
441 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 88.65
444 TestStartStop/group/no-preload/serial/SecondStart 373.77
447 TestStartStop/group/newest-cni/serial/SecondStart 374.99
448 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.49
452 TestStartStop/group/newest-cni/serial/Pause 11
487 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 270.88
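To reproduce a single failure locally, each entry above can be re-run by name with Go's -run filter. A minimal sketch, assuming the minikube repository's integration tests live under ./test/integration and that the harness picks up the prebuilt out/minikube-linux-arm64 binary; the package path and timeout are assumptions, not taken from this report:

	go test ./test/integration -timeout 60m \
		-run 'TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy'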
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (501.72s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-534748 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1210 06:25:14.424150  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:25:42.125124  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:35.782593  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:35.789163  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:35.800627  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:35.822106  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:35.863613  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:35.945128  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:36.106721  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:36.428357  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:37.070353  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:38.351889  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:40.914772  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:46.036258  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:56.277741  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:28:16.759475  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:28:57.720948  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:30:14.429471  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:30:19.646145  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-534748 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m20.266236464s)

-- stdout --
	* [functional-534748] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-534748" primary control-plane node in "functional-534748" cluster
	* Pulling base image v0.0.48-1765319469-22089 ...
	* Found network options:
	  - HTTP_PROXY=localhost:46303
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:46303 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-534748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-534748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001121505s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00106645s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00106645s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
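The stderr above carries two actionable hints: the NO_PROXY warning near the top, and minikube's closing suggestion to pass a kubelet cgroup-driver override. A minimal sketch combining both for a retry, assuming the proxy from this run (localhost:46303) is still in effect; the --extra-config flag is taken verbatim from the suggestion above, and whether it clears the 4m0s kubelet health timeout on this cgroups v1 host is not verified here:

	# Let traffic to the minikube IP bypass the proxy
	export NO_PROXY="$NO_PROXY,192.168.49.2"

	# Retry the same start with the cgroup-driver override minikube suggests
	out/minikube-linux-arm64 start -p functional-534748 --memory=4096 \
	  --apiserver-port=8441 --wait=all --driver=docker \
	  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

If the kubelet still fails its /healthz check, the kubeadm warning points at the longer-term fix: kubelet v1.35+ drops cgroups v1 unless the kubelet configuration option FailCgroupV1 is explicitly set to false (with the validation skipped), so migrating the host to cgroups v2 is the documented path.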
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-534748 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-534748
helpers_test.go:244: (dbg) docker inspect functional-534748:

-- stdout --
	[
	    {
	        "Id": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	        "Created": "2025-12-10T06:23:23.608302198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 825111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:23:23.673039154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hostname",
	        "HostsPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hosts",
	        "LogPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db-json.log",
	        "Name": "/functional-534748",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-534748:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-534748",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	                "LowerDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-534748",
	                "Source": "/var/lib/docker/volumes/functional-534748/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-534748",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-534748",
	                "name.minikube.sigs.k8s.io": "functional-534748",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3110adc4cbcb3c834e173481804e433b3e0d0c32a77d5d828fb821433a717f76",
	            "SandboxKey": "/var/run/docker/netns/3110adc4cbcb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-534748": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:67:c6:ed:32:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5c6adbf364f07065a2170dca5c03ba4b1a3df833c656e117d5727d09797ca30e",
	                    "EndpointID": "ba255b7075cbb83aa425533c0034d7778d609f47fa6442925eb6c8393edb0fa6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-534748",
	                        "afb46bc1850e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748: exit status 6 (323.371666ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 06:31:39.116895  830272 status.go:458] kubeconfig endpoint: get endpoint: "functional-534748" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
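Exit status 6 here is the kubeconfig problem called out above: the container is Running, but the functional-534748 endpoint never made it into the kubeconfig because kubeadm init failed. Once a start succeeds, the stale context can be repaired with the command minikube itself recommends in stdout; a sketch:

	# Rewrite this profile's kubeconfig entry to point at the current endpoint
	out/minikube-linux-arm64 -p functional-534748 update-context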
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-634209 ssh sudo umount -f /mount-9p                                                                                                          │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ mount          │ -p functional-634209 /tmp/TestFunctionalparallelMountCmdspecific-port1205426619/001:/mount-9p --alsologtostderr -v=1 --port 46464                       │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh            │ functional-634209 ssh findmnt -T /mount-9p | grep 9p                                                                                                    │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh            │ functional-634209 ssh findmnt -T /mount-9p | grep 9p                                                                                                    │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh            │ functional-634209 ssh -- ls -la /mount-9p                                                                                                               │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh            │ functional-634209 ssh sudo umount -f /mount-9p                                                                                                          │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ mount          │ -p functional-634209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2460956714/001:/mount1 --alsologtostderr -v=1                                      │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ mount          │ -p functional-634209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2460956714/001:/mount2 --alsologtostderr -v=1                                      │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ mount          │ -p functional-634209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2460956714/001:/mount3 --alsologtostderr -v=1                                      │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh            │ functional-634209 ssh findmnt -T /mount1                                                                                                                │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh            │ functional-634209 ssh findmnt -T /mount2                                                                                                                │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh            │ functional-634209 ssh findmnt -T /mount3                                                                                                                │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ mount          │ -p functional-634209 --kill=true                                                                                                                        │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ update-context │ functional-634209 update-context --alsologtostderr -v=2                                                                                                 │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ update-context │ functional-634209 update-context --alsologtostderr -v=2                                                                                                 │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ update-context │ functional-634209 update-context --alsologtostderr -v=2                                                                                                 │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image          │ functional-634209 image ls --format short --alsologtostderr                                                                                             │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image          │ functional-634209 image ls --format yaml --alsologtostderr                                                                                              │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh            │ functional-634209 ssh pgrep buildkitd                                                                                                                   │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ image          │ functional-634209 image ls --format json --alsologtostderr                                                                                              │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image          │ functional-634209 image build -t localhost/my-image:functional-634209 testdata/build --alsologtostderr                                                  │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image          │ functional-634209 image ls --format table --alsologtostderr                                                                                             │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image          │ functional-634209 image ls                                                                                                                              │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ delete         │ -p functional-634209                                                                                                                                    │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start          │ -p functional-534748 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:23:18
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:23:18.561811  824724 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:23:18.561934  824724 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:23:18.561938  824724 out.go:374] Setting ErrFile to fd 2...
	I1210 06:23:18.561943  824724 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:23:18.562176  824724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 06:23:18.562596  824724 out.go:368] Setting JSON to false
	I1210 06:23:18.563405  824724 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18323,"bootTime":1765329476,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 06:23:18.563461  824724 start.go:143] virtualization:  
	I1210 06:23:18.567966  824724 out.go:179] * [functional-534748] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:23:18.572615  824724 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:23:18.572738  824724 notify.go:221] Checking for updates...
	I1210 06:23:18.579560  824724 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:23:18.582785  824724 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:23:18.585998  824724 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 06:23:18.589203  824724 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:23:18.592315  824724 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:23:18.595531  824724 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:23:18.616494  824724 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:23:18.616614  824724 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:23:18.685828  824724 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-10 06:23:18.676798926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:23:18.685928  824724 docker.go:319] overlay module found
	I1210 06:23:18.689262  824724 out.go:179] * Using the docker driver based on user configuration
	I1210 06:23:18.692221  824724 start.go:309] selected driver: docker
	I1210 06:23:18.692229  824724 start.go:927] validating driver "docker" against <nil>
	I1210 06:23:18.692240  824724 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:23:18.692974  824724 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:23:18.746303  824724 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-10 06:23:18.736875636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:23:18.746448  824724 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 06:23:18.746748  824724 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:23:18.749798  824724 out.go:179] * Using Docker driver with root privileges
	I1210 06:23:18.752644  824724 cni.go:84] Creating CNI manager for ""
	I1210 06:23:18.752700  824724 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:23:18.752706  824724 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 06:23:18.752781  824724 start.go:353] cluster config:
	{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:23:18.755854  824724 out.go:179] * Starting "functional-534748" primary control-plane node in "functional-534748" cluster
	I1210 06:23:18.758672  824724 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 06:23:18.761562  824724 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:23:18.764465  824724 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:23:18.764502  824724 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 06:23:18.764509  824724 cache.go:65] Caching tarball of preloaded images
	I1210 06:23:18.764515  824724 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:23:18.764600  824724 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 06:23:18.764609  824724 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1210 06:23:18.764951  824724 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/config.json ...
	I1210 06:23:18.764969  824724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/config.json: {Name:mk60a55156bfb56daf7cb6bb30d194027be79f16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:18.783876  824724 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:23:18.783888  824724 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:23:18.783900  824724 cache.go:243] Successfully downloaded all kic artifacts
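
The pull is skipped because the kicbase reference is pinned by digest and that exact digest is already present in the local daemon. A minimal way to reproduce the check by hand (a sketch, assuming a local docker CLI and the image reference from the log above):

    # List cached kicbase images together with their digests
    docker images --digests gcr.io/k8s-minikube/kicbase-builds
    # Resolve the digest for the pinned tag directly
    docker image inspect --format '{{index .RepoDigests 0}}' \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089
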
	I1210 06:23:18.783937  824724 start.go:360] acquireMachinesLock for functional-534748: {Name:mkd9a3d78ae3a00b69c5be0f7badb099aea924eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:23:18.784041  824724 start.go:364] duration metric: took 90.307µs to acquireMachinesLock for "functional-534748"
	I1210 06:23:18.784065  824724 start.go:93] Provisioning new machine with config: &{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 06:23:18.784141  824724 start.go:125] createHost starting for "" (driver="docker")
	I1210 06:23:18.787477  824724 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1210 06:23:18.787747  824724 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:46303 to docker env.
	I1210 06:23:18.787771  824724 start.go:159] libmachine.API.Create for "functional-534748" (driver="docker")
	I1210 06:23:18.787791  824724 client.go:173] LocalClient.Create starting
	I1210 06:23:18.787850  824724 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem
	I1210 06:23:18.787885  824724 main.go:143] libmachine: Decoding PEM data...
	I1210 06:23:18.787900  824724 main.go:143] libmachine: Parsing certificate...
	I1210 06:23:18.787948  824724 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem
	I1210 06:23:18.787964  824724 main.go:143] libmachine: Decoding PEM data...
	I1210 06:23:18.787975  824724 main.go:143] libmachine: Parsing certificate...
	I1210 06:23:18.788340  824724 cli_runner.go:164] Run: docker network inspect functional-534748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:23:18.803245  824724 cli_runner.go:211] docker network inspect functional-534748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:23:18.803346  824724 network_create.go:284] running [docker network inspect functional-534748] to gather additional debugging logs...
	I1210 06:23:18.803363  824724 cli_runner.go:164] Run: docker network inspect functional-534748
	W1210 06:23:18.819409  824724 cli_runner.go:211] docker network inspect functional-534748 returned with exit code 1
	I1210 06:23:18.819428  824724 network_create.go:287] error running [docker network inspect functional-534748]: docker network inspect functional-534748: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-534748 not found
	I1210 06:23:18.819440  824724 network_create.go:289] output of [docker network inspect functional-534748]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-534748 not found
	
	** /stderr **
	I1210 06:23:18.819584  824724 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:23:18.836286  824724 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400197a210}
	I1210 06:23:18.836317  824724 network_create.go:124] attempt to create docker network functional-534748 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1210 06:23:18.836374  824724 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-534748 functional-534748
	I1210 06:23:18.898735  824724 network_create.go:108] docker network functional-534748 192.168.49.0/24 created
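
The subnet, gateway, and MTU chosen above can be read back from the created network. A sketch, assuming the same profile name and a local docker CLI:

    docker network inspect functional-534748 --format \
      '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}} mtu {{index .Options "com.docker.network.driver.mtu"}}'
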
	I1210 06:23:18.898766  824724 kic.go:121] calculated static IP "192.168.49.2" for the "functional-534748" container
	I1210 06:23:18.898840  824724 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:23:18.915016  824724 cli_runner.go:164] Run: docker volume create functional-534748 --label name.minikube.sigs.k8s.io=functional-534748 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:23:18.932547  824724 oci.go:103] Successfully created a docker volume functional-534748
	I1210 06:23:18.932643  824724 cli_runner.go:164] Run: docker run --rm --name functional-534748-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-534748 --entrypoint /usr/bin/test -v functional-534748:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
	I1210 06:23:19.518021  824724 oci.go:107] Successfully prepared a docker volume functional-534748
	I1210 06:23:19.518071  824724 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:23:19.518079  824724 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 06:23:19.518148  824724 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-534748:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 06:23:23.539706  824724 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-534748:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir: (4.021522998s)
	I1210 06:23:23.539729  824724 kic.go:203] duration metric: took 4.02164688s to extract preloaded images to volume ...
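
This is the usual pattern for populating a named volume: a throwaway container mounts the tarball read-only, mounts the volume at the extraction target, and untars into it. The result can be inspected with another one-off container; a sketch only, assuming the containerd image store lands under /var/lib/containerd inside the volume:

    # Hypothetical spot-check: list the extracted containerd state in the volume
    docker run --rm --entrypoint ls \
      -v functional-534748:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089 \
      /var/lib/containerd
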
	W1210 06:23:23.539870  824724 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 06:23:23.539977  824724 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:23:23.592919  824724 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-534748 --name functional-534748 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-534748 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-534748 --network functional-534748 --ip 192.168.49.2 --volume functional-534748:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
	I1210 06:23:23.918744  824724 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Running}}
	I1210 06:23:23.946412  824724 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:23:23.968713  824724 cli_runner.go:164] Run: docker exec functional-534748 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:23:24.021764  824724 oci.go:144] the created container "functional-534748" has a running status.
	I1210 06:23:24.021783  824724 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa...
	I1210 06:23:24.182440  824724 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:23:24.220260  824724 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:23:24.245276  824724 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:23:24.245287  824724 kic_runner.go:114] Args: [docker exec --privileged functional-534748 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:23:24.309824  824724 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:23:24.336706  824724 machine.go:94] provisionDockerMachine start ...
	I1210 06:23:24.336816  824724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:23:24.370354  824724 main.go:143] libmachine: Using SSH client type: native
	I1210 06:23:24.370926  824724 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:23:24.370948  824724 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:23:24.371869  824724 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 06:23:27.510298  824724 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-534748
	
	I1210 06:23:27.510313  824724 ubuntu.go:182] provisioning hostname "functional-534748"
	I1210 06:23:27.510376  824724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:23:27.527671  824724 main.go:143] libmachine: Using SSH client type: native
	I1210 06:23:27.527979  824724 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:23:27.527988  824724 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-534748 && echo "functional-534748" | sudo tee /etc/hostname
	I1210 06:23:27.671809  824724 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-534748
	
	I1210 06:23:27.671887  824724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:23:27.690829  824724 main.go:143] libmachine: Using SSH client type: native
	I1210 06:23:27.691147  824724 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:23:27.691161  824724 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-534748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-534748/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-534748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:23:27.827212  824724 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:23:27.827230  824724 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 06:23:27.827263  824724 ubuntu.go:190] setting up certificates
	I1210 06:23:27.827270  824724 provision.go:84] configureAuth start
	I1210 06:23:27.827331  824724 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
	I1210 06:23:27.844782  824724 provision.go:143] copyHostCerts
	I1210 06:23:27.844841  824724 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 06:23:27.844849  824724 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 06:23:27.844927  824724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 06:23:27.845086  824724 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 06:23:27.845097  824724 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 06:23:27.845125  824724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 06:23:27.845178  824724 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 06:23:27.845182  824724 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 06:23:27.845208  824724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 06:23:27.845251  824724 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.functional-534748 san=[127.0.0.1 192.168.49.2 functional-534748 localhost minikube]
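
The server certificate is signed by minikube's own CA with the SANs listed above. A hand-rolled equivalent with openssl, as a sketch only (assumes OpenSSL >= 3.0 for -addext and -copy_extensions; ca.pem and ca-key.pem stand in for the CA files named in the log):

    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.functional-534748" \
      -addext "subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-534748,DNS:localhost,DNS:minikube"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -days 365 -copy_extensions copy -out server.pem
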
	I1210 06:23:28.092554  824724 provision.go:177] copyRemoteCerts
	I1210 06:23:28.092615  824724 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:23:28.092669  824724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:23:28.111311  824724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
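
Everything from here on is driven over SSH to a port that docker forwarded to localhost. The same session can be opened manually with the generated key (values taken from this run; the host port is assigned dynamically per container, so 33530 only holds for this log):

    ssh -o StrictHostKeyChecking=no -p 33530 \
      -i /home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa \
      docker@127.0.0.1
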
	I1210 06:23:28.211137  824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 06:23:28.229630  824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:23:28.247857  824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:23:28.265684  824724 provision.go:87] duration metric: took 438.390632ms to configureAuth
	I1210 06:23:28.265700  824724 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:23:28.265893  824724 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:23:28.265900  824724 machine.go:97] duration metric: took 3.929182228s to provisionDockerMachine
	I1210 06:23:28.265906  824724 client.go:176] duration metric: took 9.478110735s to LocalClient.Create
	I1210 06:23:28.265920  824724 start.go:167] duration metric: took 9.478150588s to libmachine.API.Create "functional-534748"
	I1210 06:23:28.265925  824724 start.go:293] postStartSetup for "functional-534748" (driver="docker")
	I1210 06:23:28.265935  824724 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:23:28.265985  824724 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:23:28.266022  824724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:23:28.283610  824724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:23:28.382841  824724 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:23:28.386415  824724 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:23:28.386433  824724 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:23:28.386445  824724 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 06:23:28.386525  824724 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 06:23:28.386615  824724 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 06:23:28.386699  824724 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts -> hosts in /etc/test/nested/copy/786751
	I1210 06:23:28.386743  824724 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/786751
	I1210 06:23:28.394793  824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 06:23:28.413551  824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts --> /etc/test/nested/copy/786751/hosts (40 bytes)
	I1210 06:23:28.432312  824724 start.go:296] duration metric: took 166.372215ms for postStartSetup
	I1210 06:23:28.432697  824724 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
	I1210 06:23:28.451224  824724 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/config.json ...
	I1210 06:23:28.451545  824724 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:23:28.451602  824724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:23:28.472090  824724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:23:28.567977  824724 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:23:28.572788  824724 start.go:128] duration metric: took 9.788632154s to createHost
	I1210 06:23:28.572804  824724 start.go:83] releasing machines lock for "functional-534748", held for 9.788754995s
	I1210 06:23:28.572884  824724 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
	I1210 06:23:28.593611  824724 out.go:179] * Found network options:
	I1210 06:23:28.596602  824724 out.go:179]   - HTTP_PROXY=localhost:46303
	W1210 06:23:28.599496  824724 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1210 06:23:28.602371  824724 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
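
The warning means requests to the node IP would otherwise be routed through the local HTTP proxy. The usual fix, per the handbook page linked above, is to exempt the minikube address range before starting; a sketch:

    # Exempt the minikube subnet from proxying (both spellings are honored by different tools)
    export NO_PROXY="$NO_PROXY,192.168.49.0/24"
    export no_proxy="$no_proxy,192.168.49.0/24"
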
	I1210 06:23:28.605113  824724 ssh_runner.go:195] Run: cat /version.json
	I1210 06:23:28.605165  824724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:23:28.605177  824724 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:23:28.605237  824724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:23:28.629775  824724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:23:28.640272  824724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:23:28.730494  824724 ssh_runner.go:195] Run: systemctl --version
	I1210 06:23:28.826265  824724 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:23:28.830807  824724 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:23:28.830871  824724 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:23:28.858711  824724 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1210 06:23:28.858741  824724 start.go:496] detecting cgroup driver to use...
	I1210 06:23:28.858775  824724 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:23:28.858828  824724 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 06:23:28.875395  824724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 06:23:28.889278  824724 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:23:28.889348  824724 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:23:28.907488  824724 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:23:28.926002  824724 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:23:29.053665  824724 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:23:29.177719  824724 docker.go:234] disabling docker service ...
	I1210 06:23:29.177783  824724 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:23:29.201711  824724 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:23:29.216552  824724 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:23:29.341854  824724 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:23:29.472954  824724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:23:29.485915  824724 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:23:29.500245  824724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 06:23:29.509024  824724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 06:23:29.518257  824724 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 06:23:29.518332  824724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 06:23:29.527159  824724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:23:29.535968  824724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 06:23:29.544602  824724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:23:29.553251  824724 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:23:29.561523  824724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 06:23:29.570558  824724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 06:23:29.579030  824724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 06:23:29.588073  824724 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:23:29.595737  824724 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:23:29.603096  824724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:23:29.719604  824724 ssh_runner.go:195] Run: sudo systemctl restart containerd
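
After the sed edits and restart, the effective containerd settings can be spot-checked in place. A sketch, run inside the node container (e.g. via minikube ssh):

    # Cgroup driver must agree with the kubelet (cgroupfs in this run)
    grep -n 'SystemdCgroup' /etc/containerd/config.toml
    # Pause image pinned by minikube
    grep -n 'sandbox_image' /etc/containerd/config.toml
    # The runtime answers on the CRI socket crictl was pointed at
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
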
	I1210 06:23:29.857543  824724 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 06:23:29.857607  824724 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 06:23:29.861945  824724 start.go:564] Will wait 60s for crictl version
	I1210 06:23:29.862002  824724 ssh_runner.go:195] Run: which crictl
	I1210 06:23:29.865723  824724 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:23:29.896288  824724 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 06:23:29.896349  824724 ssh_runner.go:195] Run: containerd --version
	I1210 06:23:29.916811  824724 ssh_runner.go:195] Run: containerd --version
	I1210 06:23:29.941490  824724 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 06:23:29.944391  824724 cli_runner.go:164] Run: docker network inspect functional-534748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:23:29.960572  824724 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 06:23:29.964489  824724 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:23:29.974386  824724 kubeadm.go:884] updating cluster {Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:23:29.974570  824724 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:23:29.974644  824724 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:23:29.999694  824724 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 06:23:29.999706  824724 containerd.go:534] Images already preloaded, skipping extraction
	I1210 06:23:29.999767  824724 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:23:30.037219  824724 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 06:23:30.037233  824724 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:23:30.037240  824724 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1210 06:23:30.037354  824724 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-534748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:23:30.037435  824724 ssh_runner.go:195] Run: sudo crictl info
	I1210 06:23:30.075940  824724 cni.go:84] Creating CNI manager for ""
	I1210 06:23:30.075952  824724 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:23:30.075976  824724 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:23:30.075999  824724 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-534748 NodeName:functional-534748 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:23:30.076131  824724 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-534748"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
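
The assembled config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) can be validated without touching the node state by letting kubeadm plan the init. A sketch, once minikube has copied the file into place:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
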
	
	I1210 06:23:30.076210  824724 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 06:23:30.086272  824724 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:23:30.086344  824724 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:23:30.095832  824724 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 06:23:30.111095  824724 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 06:23:30.125725  824724 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1210 06:23:30.140343  824724 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:23:30.144388  824724 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:23:30.155306  824724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:23:30.272267  824724 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:23:30.289369  824724 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748 for IP: 192.168.49.2
	I1210 06:23:30.289380  824724 certs.go:195] generating shared ca certs ...
	I1210 06:23:30.289407  824724 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:30.289569  824724 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 06:23:30.289628  824724 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 06:23:30.289635  824724 certs.go:257] generating profile certs ...
	I1210 06:23:30.289702  824724 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key
	I1210 06:23:30.289713  824724 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt with IP's: []
	I1210 06:23:30.577813  824724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt ...
	I1210 06:23:30.577830  824724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: {Name:mk182e2de3a6255438833644eab98673931582c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:30.578053  824724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key ...
	I1210 06:23:30.578060  824724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key: {Name:mk775339eb0119e3f53731683334a2bf251dfdc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:30.578160  824724 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key.7cb3dc2f
	I1210 06:23:30.578171  824724 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt.7cb3dc2f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1210 06:23:30.705071  824724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt.7cb3dc2f ...
	I1210 06:23:30.705088  824724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt.7cb3dc2f: {Name:mk915863deb06984ded66016408409304916e860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:30.705280  824724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key.7cb3dc2f ...
	I1210 06:23:30.705288  824724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key.7cb3dc2f: {Name:mk558cd656fe759f628c9df8b2c6b8157bf7257c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:30.705377  824724 certs.go:382] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt.7cb3dc2f -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt
	I1210 06:23:30.705459  824724 certs.go:386] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key.7cb3dc2f -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key
	I1210 06:23:30.705530  824724 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key
	I1210 06:23:30.705545  824724 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.crt with IP's: []
	I1210 06:23:30.822321  824724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.crt ...
	I1210 06:23:30.822338  824724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.crt: {Name:mk49c9b79acd5b2da0c0a9e737eef381494c2c27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:30.822549  824724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key ...
	I1210 06:23:30.822557  824724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key: {Name:mk1cb30bcf4253db4762bdd181e4e7acf1302f2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:30.822782  824724 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 06:23:30.822829  824724 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 06:23:30.822837  824724 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:23:30.822862  824724 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 06:23:30.822891  824724 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:23:30.822913  824724 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 06:23:30.822957  824724 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 06:23:30.823552  824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:23:30.843034  824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:23:30.862309  824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:23:30.881205  824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:23:30.899622  824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:23:30.917571  824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:23:30.936094  824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:23:30.954214  824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:23:30.972466  824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 06:23:30.990212  824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:23:31.009829  824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 06:23:31.028583  824724 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:23:31.041431  824724 ssh_runner.go:195] Run: openssl version
	I1210 06:23:31.050524  824724 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 06:23:31.058272  824724 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 06:23:31.066301  824724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 06:23:31.070888  824724 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 06:23:31.070945  824724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 06:23:31.116432  824724 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:23:31.124295  824724 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/786751.pem /etc/ssl/certs/51391683.0
	I1210 06:23:31.132379  824724 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 06:23:31.141114  824724 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 06:23:31.149283  824724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 06:23:31.153297  824724 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 06:23:31.153355  824724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 06:23:31.194916  824724 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:23:31.202267  824724 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7867512.pem /etc/ssl/certs/3ec20f2e.0
	I1210 06:23:31.209669  824724 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:23:31.217299  824724 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:23:31.225148  824724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:23:31.228942  824724 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:23:31.228999  824724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:23:31.274881  824724 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:23:31.282499  824724 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
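
The names 51391683.0, 3ec20f2e.0, and b5213941.0 follow OpenSSL's subject-hash layout: each CA in /etc/ssl/certs is reachable through a symlink named <subject-hash>.0, and the hash is exactly what openssl x509 -hash prints. The same dance for one cert, as a sketch:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
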
	I1210 06:23:31.289942  824724 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:23:31.293526  824724 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:23:31.293570  824724 kubeadm.go:401] StartCluster: {Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:23:31.293643  824724 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 06:23:31.293702  824724 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:23:31.321141  824724 cri.go:89] found id: ""
	I1210 06:23:31.321205  824724 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:23:31.329010  824724 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:23:31.336715  824724 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:23:31.336796  824724 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:23:31.344526  824724 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:23:31.344537  824724 kubeadm.go:158] found existing configuration files:
	
	I1210 06:23:31.344591  824724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:23:31.352424  824724 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:23:31.352480  824724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:23:31.359796  824724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:23:31.367596  824724 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:23:31.367662  824724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:23:31.375285  824724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:23:31.383343  824724 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:23:31.383409  824724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:23:31.390955  824724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:23:31.398664  824724 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:23:31.398721  824724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:23:31.406242  824724 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:23:31.453829  824724 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 06:23:31.453880  824724 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:23:31.534875  824724 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:23:31.534944  824724 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:23:31.534979  824724 kubeadm.go:319] OS: Linux
	I1210 06:23:31.535022  824724 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:23:31.535069  824724 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:23:31.535115  824724 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:23:31.535161  824724 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:23:31.535208  824724 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:23:31.535257  824724 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:23:31.535300  824724 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:23:31.535347  824724 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:23:31.535392  824724 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:23:31.602248  824724 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:23:31.602379  824724 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:23:31.602496  824724 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:23:31.610822  824724 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:23:31.617037  824724 out.go:252]   - Generating certificates and keys ...
	I1210 06:23:31.617136  824724 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:23:31.617201  824724 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:23:32.164365  824724 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 06:23:32.400510  824724 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 06:23:32.637786  824724 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 06:23:32.828842  824724 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 06:23:33.099528  824724 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 06:23:33.099825  824724 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-534748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 06:23:33.371416  824724 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 06:23:33.371713  824724 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-534748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 06:23:33.628997  824724 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 06:23:33.868348  824724 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 06:23:34.405704  824724 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 06:23:34.405768  824724 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:23:34.690708  824724 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:23:35.224487  824724 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:23:35.300387  824724 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:23:35.503970  824724 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:23:35.789168  824724 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:23:35.789772  824724 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:23:35.793039  824724 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:23:35.796572  824724 out.go:252]   - Booting up control plane ...
	I1210 06:23:35.796665  824724 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:23:35.796742  824724 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:23:35.797250  824724 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:23:35.813928  824724 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:23:35.814030  824724 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:23:35.822221  824724 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:23:35.822490  824724 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:23:35.822695  824724 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:23:35.959601  824724 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:23:35.959714  824724 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:27:35.959280  824724 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001121505s
	I1210 06:27:35.959314  824724 kubeadm.go:319] 
	I1210 06:27:35.959376  824724 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:27:35.959412  824724 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:27:35.959525  824724 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:27:35.959530  824724 kubeadm.go:319] 
	I1210 06:27:35.959645  824724 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:27:35.959682  824724 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:27:35.959718  824724 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:27:35.959721  824724 kubeadm.go:319] 
	I1210 06:27:35.964211  824724 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:27:35.964697  824724 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:27:35.964835  824724 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:27:35.965086  824724 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:27:35.965096  824724 kubeadm.go:319] 
	I1210 06:27:35.965197  824724 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 06:27:35.965301  824724 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-534748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-534748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001121505s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
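	For manual triage, kubeadm's two troubleshooting hints above can be run from the host against the minikube node; a sketch only, with the profile name taken from this log:

		# Inspect the kubelet unit inside the functional-534748 node
		# (follows the 'systemctl status kubelet' / 'journalctl -xeu kubelet'
		# hints printed by kubeadm above).
		minikube ssh -p functional-534748 -- sudo systemctl status kubelet
		minikube ssh -p functional-534748 -- sudo journalctl -xeu kubelet
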
	I1210 06:27:35.965400  824724 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 06:27:36.375968  824724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:27:36.389422  824724 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:27:36.389482  824724 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:27:36.397243  824724 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:27:36.397253  824724 kubeadm.go:158] found existing configuration files:
	
	I1210 06:27:36.397305  824724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:27:36.405114  824724 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:27:36.405171  824724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:27:36.412753  824724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:27:36.420490  824724 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:27:36.420545  824724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:27:36.428126  824724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:27:36.436293  824724 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:27:36.436352  824724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:27:36.443908  824724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:27:36.451976  824724 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:27:36.452034  824724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:27:36.459851  824724 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:27:36.498112  824724 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 06:27:36.498163  824724 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:27:36.573055  824724 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:27:36.573144  824724 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:27:36.573204  824724 kubeadm.go:319] OS: Linux
	I1210 06:27:36.573260  824724 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:27:36.573308  824724 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:27:36.573369  824724 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:27:36.573425  824724 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:27:36.573481  824724 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:27:36.573548  824724 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:27:36.573592  824724 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:27:36.573648  824724 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:27:36.573702  824724 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:27:36.643977  824724 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:27:36.644081  824724 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:27:36.644202  824724 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:27:36.651002  824724 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:27:36.656480  824724 out.go:252]   - Generating certificates and keys ...
	I1210 06:27:36.656581  824724 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:27:36.656659  824724 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:27:36.656760  824724 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:27:36.656824  824724 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:27:36.656899  824724 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:27:36.656956  824724 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:27:36.657022  824724 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:27:36.657087  824724 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:27:36.657166  824724 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:27:36.657242  824724 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:27:36.657282  824724 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:27:36.657347  824724 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:27:37.043000  824724 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:27:37.557603  824724 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:27:37.836966  824724 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:27:37.930755  824724 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:27:38.179355  824724 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:27:38.180126  824724 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:27:38.182784  824724 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:27:38.186129  824724 out.go:252]   - Booting up control plane ...
	I1210 06:27:38.186236  824724 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:27:38.186313  824724 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:27:38.186379  824724 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:27:38.206601  824724 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:27:38.206701  824724 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:27:38.214027  824724 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:27:38.214319  824724 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:27:38.214521  824724 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:27:38.356052  824724 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:27:38.356165  824724 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:31:38.356742  824724 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00106645s
	I1210 06:31:38.356763  824724 kubeadm.go:319] 
	I1210 06:31:38.356817  824724 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:31:38.356847  824724 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:31:38.357052  824724 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:31:38.357057  824724 kubeadm.go:319] 
	I1210 06:31:38.357161  824724 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:31:38.357190  824724 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:31:38.357219  824724 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:31:38.357221  824724 kubeadm.go:319] 
	I1210 06:31:38.361679  824724 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:31:38.362132  824724 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:31:38.362236  824724 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:31:38.362471  824724 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:31:38.362478  824724 kubeadm.go:319] 
	I1210 06:31:38.362566  824724 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 06:31:38.362606  824724 kubeadm.go:403] duration metric: took 8m7.069039681s to StartCluster
	I1210 06:31:38.362655  824724 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:31:38.362721  824724 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:31:38.387152  824724 cri.go:89] found id: ""
	I1210 06:31:38.387182  824724 logs.go:282] 0 containers: []
	W1210 06:31:38.387188  824724 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:31:38.387193  824724 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:31:38.387251  824724 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:31:38.411100  824724 cri.go:89] found id: ""
	I1210 06:31:38.411115  824724 logs.go:282] 0 containers: []
	W1210 06:31:38.411121  824724 logs.go:284] No container was found matching "etcd"
	I1210 06:31:38.411126  824724 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:31:38.411184  824724 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:31:38.435309  824724 cri.go:89] found id: ""
	I1210 06:31:38.435322  824724 logs.go:282] 0 containers: []
	W1210 06:31:38.435329  824724 logs.go:284] No container was found matching "coredns"
	I1210 06:31:38.435334  824724 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:31:38.435399  824724 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:31:38.460192  824724 cri.go:89] found id: ""
	I1210 06:31:38.460205  824724 logs.go:282] 0 containers: []
	W1210 06:31:38.460212  824724 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:31:38.460217  824724 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:31:38.460276  824724 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:31:38.484680  824724 cri.go:89] found id: ""
	I1210 06:31:38.484695  824724 logs.go:282] 0 containers: []
	W1210 06:31:38.484701  824724 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:31:38.484706  824724 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:31:38.484766  824724 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:31:38.508595  824724 cri.go:89] found id: ""
	I1210 06:31:38.508608  824724 logs.go:282] 0 containers: []
	W1210 06:31:38.508615  824724 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:31:38.508621  824724 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:31:38.508680  824724 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:31:38.534537  824724 cri.go:89] found id: ""
	I1210 06:31:38.534551  824724 logs.go:282] 0 containers: []
	W1210 06:31:38.534558  824724 logs.go:284] No container was found matching "kindnet"
	I1210 06:31:38.534567  824724 logs.go:123] Gathering logs for dmesg ...
	I1210 06:31:38.534579  824724 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:31:38.551392  824724 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:31:38.551409  824724 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:31:38.618849  824724 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:31:38.605890    4771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:31:38.611125    4771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:31:38.611837    4771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:31:38.613473    4771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:31:38.613796    4771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:31:38.605890    4771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:31:38.611125    4771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:31:38.611837    4771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:31:38.613473    4771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:31:38.613796    4771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:31:38.618860  824724 logs.go:123] Gathering logs for containerd ...
	I1210 06:31:38.618873  824724 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:31:38.659217  824724 logs.go:123] Gathering logs for container status ...
	I1210 06:31:38.659242  824724 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:31:38.686990  824724 logs.go:123] Gathering logs for kubelet ...
	I1210 06:31:38.687005  824724 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
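	The log-gathering Run lines above are reproducible by hand; a sketch, copied from the commands in this section, meant to be executed inside the node (for example via minikube ssh):

		# Same diagnostics minikube collects here: kernel messages, containerd
		# and kubelet journals, and container status.
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
		sudo journalctl -u containerd -n 400
		sudo journalctl -u kubelet -n 400
		sudo crictl ps -a
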
	W1210 06:31:38.744286  824724 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00106645s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:31:38.744331  824724 out.go:285] * 
	W1210 06:31:38.744399  824724 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00106645s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:31:38.744421  824724 out.go:285] * 
	W1210 06:31:38.746553  824724 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:31:38.752408  824724 out.go:203] 
	W1210 06:31:38.755993  824724 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00106645s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:31:38.756041  824724 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:31:38.756061  824724 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:31:38.759668  824724 out.go:203] 
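	The Suggestion line above corresponds to a retry along these lines; a sketch only, assembled from flags that appear in this log, and note the cgroup-driver override may not clear the cgroup v1 validation failure shown in the kubelet journal below:

		# Hypothetical retry with the flag minikube suggests above.
		minikube start -p functional-534748 --driver=docker \
		  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 \
		  --extra-config=kubelet.cgroup-driver=systemd
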
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.804018086Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.804089931Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.804203434Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.804280793Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.804381914Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.804455885Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.804517473Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.804616461Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.804709295Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.804815496Z" level=info msg="Connect containerd service"
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.806335967Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.807027307Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.816456518Z" level=info msg="Start subscribing containerd event"
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.816675745Z" level=info msg="Start recovering state"
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.816681612Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.816908003Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.854717261Z" level=info msg="Start event monitor"
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.854769307Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.854779908Z" level=info msg="Start streaming server"
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.854789911Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.854798346Z" level=info msg="runtime interface starting up..."
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.854804549Z" level=info msg="starting plugins..."
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.854816651Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 06:23:29 functional-534748 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.858775013Z" level=info msg="containerd successfully booted in 0.076971s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:31:39.733433    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:31:39.733867    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:31:39.735362    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:31:39.735759    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:31:39.737222    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 06:31:39 up  5:13,  0 user,  load average: 0.12, 0.47, 1.07
	Linux functional-534748 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:31:36 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:31:37 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 10 06:31:37 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:31:37 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:31:37 functional-534748 kubelet[4699]: E1210 06:31:37.337790    4699 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:31:37 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:31:37 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:31:38 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 10 06:31:38 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:31:38 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:31:38 functional-534748 kubelet[4704]: E1210 06:31:38.090975    4704 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:31:38 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:31:38 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:31:38 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 10 06:31:38 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:31:38 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:31:38 functional-534748 kubelet[4791]: E1210 06:31:38.865486    4791 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:31:38 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:31:38 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:31:39 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 10 06:31:39 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:31:39 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:31:39 functional-534748 kubelet[4855]: E1210 06:31:39.619203    4855 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:31:39 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:31:39 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
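
The restart loop above (counters 318 through 321) is a single root cause repeated: the v1.35.0-beta.0 kubelet refuses to validate its configuration on a cgroup v1 host. The standard way to confirm which hierarchy a host exposes is the filesystem type of the cgroup mount:

    stat -fc %T /sys/fs/cgroup/   # "cgroup2fs" on a cgroup v2 host, "tmpfs" on legacy v1

If that prints tmpfs here, it matches the validation error; booting the host with systemd.unified_cgroup_hierarchy=1 (or moving the job to a cgroup v2 host image) is the usual remedy, rather than anything minikube itself can patch around.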
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748: exit status 6 (358.166138ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 06:31:40.215114  830486 status.go:458] kubeconfig endpoint: get endpoint: "functional-534748" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-534748" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (501.72s)
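
The final status probe failed for a more mundane reason than the apiserver itself: the profile's entry is missing from the kubeconfig entirely (status.go:458 above). minikube's own hint in the stdout is the right lever; a sketch of the invocation against this profile:

    minikube update-context -p functional-534748
    kubectl config current-context   # should now report functional-534748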

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.77s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1210 06:31:40.230446  786751 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-534748 --alsologtostderr -v=8
E1210 06:32:35.782962  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:33:03.488480  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:35:14.424316  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:36:37.487194  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:37:35.782520  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-534748 --alsologtostderr -v=8: exit status 80 (6m5.848425107s)

-- stdout --
	* [functional-534748] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-534748" primary control-plane node in "functional-534748" cluster
	* Pulling base image v0.0.48-1765319469-22089 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1210 06:31:40.279311  830558 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:31:40.279505  830558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:31:40.279534  830558 out.go:374] Setting ErrFile to fd 2...
	I1210 06:31:40.279556  830558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:31:40.279849  830558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 06:31:40.280242  830558 out.go:368] Setting JSON to false
	I1210 06:31:40.281164  830558 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18825,"bootTime":1765329476,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 06:31:40.281259  830558 start.go:143] virtualization:  
	I1210 06:31:40.284710  830558 out.go:179] * [functional-534748] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:31:40.288411  830558 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:31:40.288473  830558 notify.go:221] Checking for updates...
	I1210 06:31:40.295121  830558 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:31:40.302607  830558 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:40.305522  830558 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 06:31:40.308355  830558 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:31:40.311698  830558 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:31:40.315095  830558 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:31:40.315199  830558 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:31:40.353797  830558 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:31:40.353929  830558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:31:40.415859  830558 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:31:40.405265704 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:31:40.415979  830558 docker.go:319] overlay module found
	I1210 06:31:40.419085  830558 out.go:179] * Using the docker driver based on existing profile
	I1210 06:31:40.421970  830558 start.go:309] selected driver: docker
	I1210 06:31:40.421991  830558 start.go:927] validating driver "docker" against &{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:31:40.422101  830558 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:31:40.422196  830558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:31:40.479216  830558 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:31:40.46865578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:31:40.479663  830558 cni.go:84] Creating CNI manager for ""
	I1210 06:31:40.479723  830558 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:31:40.479768  830558 start.go:353] cluster config:
	{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:31:40.482983  830558 out.go:179] * Starting "functional-534748" primary control-plane node in "functional-534748" cluster
	I1210 06:31:40.485814  830558 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 06:31:40.488782  830558 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:31:40.491625  830558 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:31:40.491676  830558 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 06:31:40.491687  830558 cache.go:65] Caching tarball of preloaded images
	I1210 06:31:40.491736  830558 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:31:40.491792  830558 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 06:31:40.491804  830558 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1210 06:31:40.491917  830558 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/config.json ...
	I1210 06:31:40.511808  830558 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:31:40.511830  830558 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:31:40.511847  830558 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:31:40.511881  830558 start.go:360] acquireMachinesLock for functional-534748: {Name:mkd9a3d78ae3a00b69c5be0f7badb099aea924eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:31:40.511943  830558 start.go:364] duration metric: took 39.41µs to acquireMachinesLock for "functional-534748"
	I1210 06:31:40.511975  830558 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:31:40.511985  830558 fix.go:54] fixHost starting: 
	I1210 06:31:40.512241  830558 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:31:40.529256  830558 fix.go:112] recreateIfNeeded on functional-534748: state=Running err=<nil>
	W1210 06:31:40.529298  830558 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:31:40.532448  830558 out.go:252] * Updating the running docker "functional-534748" container ...
	I1210 06:31:40.532488  830558 machine.go:94] provisionDockerMachine start ...
	I1210 06:31:40.532584  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:40.550188  830558 main.go:143] libmachine: Using SSH client type: native
	I1210 06:31:40.550543  830558 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:31:40.550560  830558 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:31:40.681995  830558 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-534748
	
	I1210 06:31:40.682020  830558 ubuntu.go:182] provisioning hostname "functional-534748"
	I1210 06:31:40.682096  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:40.699737  830558 main.go:143] libmachine: Using SSH client type: native
	I1210 06:31:40.700054  830558 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:31:40.700072  830558 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-534748 && echo "functional-534748" | sudo tee /etc/hostname
	I1210 06:31:40.843977  830558 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-534748
	
	I1210 06:31:40.844083  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:40.862627  830558 main.go:143] libmachine: Using SSH client type: native
	I1210 06:31:40.862951  830558 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:31:40.862975  830558 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-534748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-534748/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-534748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:31:40.999052  830558 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:31:40.999087  830558 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 06:31:40.999116  830558 ubuntu.go:190] setting up certificates
	I1210 06:31:40.999127  830558 provision.go:84] configureAuth start
	I1210 06:31:40.999208  830558 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
	I1210 06:31:41.018099  830558 provision.go:143] copyHostCerts
	I1210 06:31:41.018148  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 06:31:41.018188  830558 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 06:31:41.018200  830558 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 06:31:41.018276  830558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 06:31:41.018376  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 06:31:41.018397  830558 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 06:31:41.018412  830558 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 06:31:41.018442  830558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 06:31:41.018539  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 06:31:41.018565  830558 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 06:31:41.018570  830558 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 06:31:41.018598  830558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 06:31:41.018664  830558 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.functional-534748 san=[127.0.0.1 192.168.49.2 functional-534748 localhost minikube]
	I1210 06:31:41.416959  830558 provision.go:177] copyRemoteCerts
	I1210 06:31:41.417039  830558 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:31:41.417085  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.434643  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:41.530263  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 06:31:41.530324  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:31:41.547539  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 06:31:41.547601  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:31:41.565054  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 06:31:41.565115  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 06:31:41.582586  830558 provision.go:87] duration metric: took 583.43959ms to configureAuth
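
configureAuth above regenerates the machine server certificate with the SAN list logged at 06:31:41.018664 and then copies it to /etc/docker on the node. A hedged spot-check that those SANs actually landed in the copied cert, using the path from the scp lines above (requires OpenSSL 1.1.1+ for -ext):

    openssl x509 -in /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem \
      -noout -ext subjectAltName
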
	I1210 06:31:41.582635  830558 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:31:41.582823  830558 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:31:41.582837  830558 machine.go:97] duration metric: took 1.050342086s to provisionDockerMachine
	I1210 06:31:41.582845  830558 start.go:293] postStartSetup for "functional-534748" (driver="docker")
	I1210 06:31:41.582857  830558 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:31:41.582912  830558 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:31:41.582957  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.603404  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:41.698354  830558 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:31:41.701779  830558 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 06:31:41.701843  830558 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 06:31:41.701865  830558 command_runner.go:130] > VERSION_ID="12"
	I1210 06:31:41.701877  830558 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 06:31:41.701883  830558 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 06:31:41.701887  830558 command_runner.go:130] > ID=debian
	I1210 06:31:41.701891  830558 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 06:31:41.701896  830558 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 06:31:41.701906  830558 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 06:31:41.701968  830558 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:31:41.702000  830558 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:31:41.702014  830558 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 06:31:41.702084  830558 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 06:31:41.702172  830558 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 06:31:41.702185  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> /etc/ssl/certs/7867512.pem
	I1210 06:31:41.702261  830558 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts -> hosts in /etc/test/nested/copy/786751
	I1210 06:31:41.702269  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts -> /etc/test/nested/copy/786751/hosts
	I1210 06:31:41.702315  830558 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/786751
	I1210 06:31:41.709991  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 06:31:41.727898  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts --> /etc/test/nested/copy/786751/hosts (40 bytes)
	I1210 06:31:41.745651  830558 start.go:296] duration metric: took 162.79042ms for postStartSetup
	I1210 06:31:41.745798  830558 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:31:41.745866  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.763287  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:41.863262  830558 command_runner.go:130] > 19%
	I1210 06:31:41.863843  830558 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:31:41.868394  830558 command_runner.go:130] > 159G
	I1210 06:31:41.868719  830558 fix.go:56] duration metric: took 1.356728705s for fixHost
	I1210 06:31:41.868739  830558 start.go:83] releasing machines lock for "functional-534748", held for 1.35678464s
	I1210 06:31:41.868810  830558 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
	I1210 06:31:41.887031  830558 ssh_runner.go:195] Run: cat /version.json
	I1210 06:31:41.887084  830558 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:31:41.887092  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.887143  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.906606  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:41.920523  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:42.095537  830558 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1210 06:31:42.095667  830558 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765319469-22089", "minikube_version": "v1.37.0", "commit": "3b564f551de69272c9de22efc5b37f8a5b0156c7"}
	I1210 06:31:42.095846  830558 ssh_runner.go:195] Run: systemctl --version
	I1210 06:31:42.103080  830558 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 06:31:42.103120  830558 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 06:31:42.103532  830558 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 06:31:42.109223  830558 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 06:31:42.109308  830558 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:31:42.109410  830558 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:31:42.119226  830558 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:31:42.119255  830558 start.go:496] detecting cgroup driver to use...
	I1210 06:31:42.119293  830558 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:31:42.119365  830558 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 06:31:42.140472  830558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 06:31:42.156795  830558 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:31:42.156872  830558 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:31:42.175919  830558 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:31:42.191679  830558 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:31:42.319538  830558 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:31:42.438460  830558 docker.go:234] disabling docker service ...
	I1210 06:31:42.438580  830558 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:31:42.456224  830558 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:31:42.471442  830558 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:31:42.599250  830558 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:31:42.716867  830558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:31:42.729172  830558 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:31:42.742342  830558 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1210 06:31:42.743581  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 06:31:42.752861  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 06:31:42.762203  830558 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 06:31:42.762278  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 06:31:42.771751  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:31:42.780168  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 06:31:42.788652  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:31:42.797230  830558 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:31:42.805633  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 06:31:42.814368  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 06:31:42.823074  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 06:31:42.832256  830558 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:31:42.839109  830558 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 06:31:42.840076  830558 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:31:42.847676  830558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:31:42.968893  830558 ssh_runner.go:195] Run: sudo systemctl restart containerd
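
The run of sed invocations above rewrites /etc/containerd/config.toml in place (cgroupfs driver, pause:3.10.1 sandbox image, CNI conf dir, unprivileged ports) before this restart. A quick way to confirm the rewrites took effect on the node, grepping only for keys that appear in the commands above:

    sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
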
	I1210 06:31:43.099901  830558 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 06:31:43.099974  830558 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 06:31:43.103852  830558 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1210 06:31:43.103874  830558 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 06:31:43.103881  830558 command_runner.go:130] > Device: 0,72	Inode: 1614        Links: 1
	I1210 06:31:43.103888  830558 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 06:31:43.103903  830558 command_runner.go:130] > Access: 2025-12-10 06:31:43.050872938 +0000
	I1210 06:31:43.103913  830558 command_runner.go:130] > Modify: 2025-12-10 06:31:43.050872938 +0000
	I1210 06:31:43.103919  830558 command_runner.go:130] > Change: 2025-12-10 06:31:43.062873060 +0000
	I1210 06:31:43.103925  830558 command_runner.go:130] >  Birth: -
	I1210 06:31:43.103951  830558 start.go:564] Will wait 60s for crictl version
	I1210 06:31:43.104009  830558 ssh_runner.go:195] Run: which crictl
	I1210 06:31:43.107381  830558 command_runner.go:130] > /usr/local/bin/crictl
	I1210 06:31:43.107477  830558 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:31:43.129358  830558 command_runner.go:130] > Version:  0.1.0
	I1210 06:31:43.129383  830558 command_runner.go:130] > RuntimeName:  containerd
	I1210 06:31:43.129392  830558 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1210 06:31:43.129396  830558 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 06:31:43.131610  830558 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 06:31:43.131682  830558 ssh_runner.go:195] Run: containerd --version
	I1210 06:31:43.151833  830558 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1210 06:31:43.153818  830558 ssh_runner.go:195] Run: containerd --version
	I1210 06:31:43.172831  830558 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1210 06:31:43.180465  830558 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 06:31:43.183314  830558 cli_runner.go:164] Run: docker network inspect functional-534748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:31:43.199081  830558 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 06:31:43.202971  830558 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1210 06:31:43.203147  830558 kubeadm.go:884] updating cluster {Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:31:43.203272  830558 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:31:43.203351  830558 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:31:43.227955  830558 command_runner.go:130] > {
	I1210 06:31:43.227978  830558 command_runner.go:130] >   "images":  [
	I1210 06:31:43.227982  830558 command_runner.go:130] >     {
	I1210 06:31:43.227991  830558 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1210 06:31:43.227996  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228002  830558 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1210 06:31:43.228005  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228009  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228020  830558 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1210 06:31:43.228023  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228028  830558 command_runner.go:130] >       "size":  "40636774",
	I1210 06:31:43.228032  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228036  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228040  830558 command_runner.go:130] >     },
	I1210 06:31:43.228044  830558 command_runner.go:130] >     {
	I1210 06:31:43.228052  830558 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1210 06:31:43.228056  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228061  830558 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 06:31:43.228066  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228082  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228094  830558 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1210 06:31:43.228097  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228102  830558 command_runner.go:130] >       "size":  "8034419",
	I1210 06:31:43.228108  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228112  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228117  830558 command_runner.go:130] >     },
	I1210 06:31:43.228121  830558 command_runner.go:130] >     {
	I1210 06:31:43.228128  830558 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 06:31:43.228135  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228141  830558 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 06:31:43.228153  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228160  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228168  830558 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1210 06:31:43.228174  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228178  830558 command_runner.go:130] >       "size":  "21168808",
	I1210 06:31:43.228182  830558 command_runner.go:130] >       "username":  "nonroot",
	I1210 06:31:43.228186  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228191  830558 command_runner.go:130] >     },
	I1210 06:31:43.228195  830558 command_runner.go:130] >     {
	I1210 06:31:43.228204  830558 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1210 06:31:43.228208  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228215  830558 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1210 06:31:43.228219  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228225  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228233  830558 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1210 06:31:43.228239  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228243  830558 command_runner.go:130] >       "size":  "21136588",
	I1210 06:31:43.228247  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228250  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.228254  830558 command_runner.go:130] >       },
	I1210 06:31:43.228258  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228264  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228272  830558 command_runner.go:130] >     },
	I1210 06:31:43.228279  830558 command_runner.go:130] >     {
	I1210 06:31:43.228286  830558 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1210 06:31:43.228290  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228295  830558 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1210 06:31:43.228299  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228303  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228313  830558 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1210 06:31:43.228317  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228321  830558 command_runner.go:130] >       "size":  "24678359",
	I1210 06:31:43.228331  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228340  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.228350  830558 command_runner.go:130] >       },
	I1210 06:31:43.228354  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228357  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228361  830558 command_runner.go:130] >     },
	I1210 06:31:43.228364  830558 command_runner.go:130] >     {
	I1210 06:31:43.228371  830558 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1210 06:31:43.228384  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228390  830558 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1210 06:31:43.228394  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228398  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228406  830558 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1210 06:31:43.228412  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228416  830558 command_runner.go:130] >       "size":  "20661043",
	I1210 06:31:43.228420  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228424  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.228427  830558 command_runner.go:130] >       },
	I1210 06:31:43.228438  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228443  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228445  830558 command_runner.go:130] >     },
	I1210 06:31:43.228448  830558 command_runner.go:130] >     {
	I1210 06:31:43.228455  830558 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1210 06:31:43.228463  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228471  830558 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1210 06:31:43.228475  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228479  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228487  830558 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1210 06:31:43.228493  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228497  830558 command_runner.go:130] >       "size":  "22429671",
	I1210 06:31:43.228502  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228512  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228515  830558 command_runner.go:130] >     },
	I1210 06:31:43.228518  830558 command_runner.go:130] >     {
	I1210 06:31:43.228525  830558 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1210 06:31:43.228530  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228538  830558 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1210 06:31:43.228542  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228546  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228557  830558 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1210 06:31:43.228566  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228573  830558 command_runner.go:130] >       "size":  "15391364",
	I1210 06:31:43.228577  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228580  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.228584  830558 command_runner.go:130] >       },
	I1210 06:31:43.228594  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228598  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228601  830558 command_runner.go:130] >     },
	I1210 06:31:43.228604  830558 command_runner.go:130] >     {
	I1210 06:31:43.228611  830558 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 06:31:43.228617  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228621  830558 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 06:31:43.228627  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228631  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228641  830558 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1210 06:31:43.228647  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228655  830558 command_runner.go:130] >       "size":  "267939",
	I1210 06:31:43.228659  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228669  830558 command_runner.go:130] >         "value":  "65535"
	I1210 06:31:43.228673  830558 command_runner.go:130] >       },
	I1210 06:31:43.228677  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228681  830558 command_runner.go:130] >       "pinned":  true
	I1210 06:31:43.228686  830558 command_runner.go:130] >     }
	I1210 06:31:43.228689  830558 command_runner.go:130] >   ]
	I1210 06:31:43.228692  830558 command_runner.go:130] > }
	I1210 06:31:43.228843  830558 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 06:31:43.228853  830558 containerd.go:534] Images already preloaded, skipping extraction
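[editor's note] For reference, the image inventory in the JSON above can be reduced to a readable list by hand. A minimal sketch, assuming jq is installed on the host and that the node container carries the profile name (functional-534748):

    docker exec functional-534748 sudo crictl images --output json \
      | jq -r '.images[] | "\(.repoTags[0])\t\(.size)"'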
	I1210 06:31:43.228913  830558 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:31:43.254390  830558 command_runner.go:130] > {
	I1210 06:31:43.254411  830558 command_runner.go:130] >   "images":  [
	I1210 06:31:43.254415  830558 command_runner.go:130] >     {
	I1210 06:31:43.254424  830558 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1210 06:31:43.254430  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254435  830558 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1210 06:31:43.254440  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254444  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254453  830558 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1210 06:31:43.254460  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254488  830558 command_runner.go:130] >       "size":  "40636774",
	I1210 06:31:43.254495  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254499  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254508  830558 command_runner.go:130] >     },
	I1210 06:31:43.254512  830558 command_runner.go:130] >     {
	I1210 06:31:43.254527  830558 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1210 06:31:43.254534  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254540  830558 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 06:31:43.254543  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254547  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254556  830558 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1210 06:31:43.254576  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254581  830558 command_runner.go:130] >       "size":  "8034419",
	I1210 06:31:43.254585  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254589  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254600  830558 command_runner.go:130] >     },
	I1210 06:31:43.254603  830558 command_runner.go:130] >     {
	I1210 06:31:43.254609  830558 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 06:31:43.254619  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254624  830558 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 06:31:43.254627  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254638  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254649  830558 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1210 06:31:43.254661  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254665  830558 command_runner.go:130] >       "size":  "21168808",
	I1210 06:31:43.254669  830558 command_runner.go:130] >       "username":  "nonroot",
	I1210 06:31:43.254673  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254677  830558 command_runner.go:130] >     },
	I1210 06:31:43.254680  830558 command_runner.go:130] >     {
	I1210 06:31:43.254694  830558 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1210 06:31:43.254698  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254703  830558 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1210 06:31:43.254706  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254710  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254721  830558 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1210 06:31:43.254725  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254729  830558 command_runner.go:130] >       "size":  "21136588",
	I1210 06:31:43.254735  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.254739  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.254744  830558 command_runner.go:130] >       },
	I1210 06:31:43.254749  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254753  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254765  830558 command_runner.go:130] >     },
	I1210 06:31:43.254768  830558 command_runner.go:130] >     {
	I1210 06:31:43.254779  830558 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1210 06:31:43.254786  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254791  830558 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1210 06:31:43.254795  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254798  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254806  830558 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1210 06:31:43.254810  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254816  830558 command_runner.go:130] >       "size":  "24678359",
	I1210 06:31:43.254820  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.254831  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.254835  830558 command_runner.go:130] >       },
	I1210 06:31:43.254843  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254850  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254853  830558 command_runner.go:130] >     },
	I1210 06:31:43.254860  830558 command_runner.go:130] >     {
	I1210 06:31:43.254867  830558 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1210 06:31:43.254873  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254879  830558 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1210 06:31:43.254882  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254886  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254894  830558 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1210 06:31:43.254897  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254901  830558 command_runner.go:130] >       "size":  "20661043",
	I1210 06:31:43.254907  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.254911  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.254916  830558 command_runner.go:130] >       },
	I1210 06:31:43.254920  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254926  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254929  830558 command_runner.go:130] >     },
	I1210 06:31:43.254932  830558 command_runner.go:130] >     {
	I1210 06:31:43.254939  830558 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1210 06:31:43.254945  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254951  830558 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1210 06:31:43.254958  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254962  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254970  830558 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1210 06:31:43.254975  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254979  830558 command_runner.go:130] >       "size":  "22429671",
	I1210 06:31:43.254982  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254987  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254992  830558 command_runner.go:130] >     },
	I1210 06:31:43.254995  830558 command_runner.go:130] >     {
	I1210 06:31:43.255004  830558 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1210 06:31:43.255008  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.255022  830558 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1210 06:31:43.255026  830558 command_runner.go:130] >       ],
	I1210 06:31:43.255030  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.255038  830558 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1210 06:31:43.255044  830558 command_runner.go:130] >       ],
	I1210 06:31:43.255048  830558 command_runner.go:130] >       "size":  "15391364",
	I1210 06:31:43.255051  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.255055  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.255058  830558 command_runner.go:130] >       },
	I1210 06:31:43.255061  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.255065  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.255069  830558 command_runner.go:130] >     },
	I1210 06:31:43.255072  830558 command_runner.go:130] >     {
	I1210 06:31:43.255081  830558 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 06:31:43.255088  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.255093  830558 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 06:31:43.255098  830558 command_runner.go:130] >       ],
	I1210 06:31:43.255102  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.255109  830558 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1210 06:31:43.255112  830558 command_runner.go:130] >       ],
	I1210 06:31:43.255116  830558 command_runner.go:130] >       "size":  "267939",
	I1210 06:31:43.255122  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.255129  830558 command_runner.go:130] >         "value":  "65535"
	I1210 06:31:43.255136  830558 command_runner.go:130] >       },
	I1210 06:31:43.255140  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.255143  830558 command_runner.go:130] >       "pinned":  true
	I1210 06:31:43.255147  830558 command_runner.go:130] >     }
	I1210 06:31:43.255150  830558 command_runner.go:130] >   ]
	I1210 06:31:43.255153  830558 command_runner.go:130] > }
	I1210 06:31:43.257476  830558 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 06:31:43.257497  830558 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:31:43.257505  830558 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1210 06:31:43.257607  830558 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-534748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
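[editor's note] The [Unit]/[Service] fragment above is the drop-in minikube writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 328-byte scp below). To see the unit exactly as systemd resolves it on the node, merged with all drop-ins, something like this should work (a sketch):

    docker exec functional-534748 systemctl cat kubelet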
	I1210 06:31:43.257674  830558 ssh_runner.go:195] Run: sudo crictl info
	I1210 06:31:43.280486  830558 command_runner.go:130] > {
	I1210 06:31:43.280508  830558 command_runner.go:130] >   "cniconfig": {
	I1210 06:31:43.280515  830558 command_runner.go:130] >     "Networks": [
	I1210 06:31:43.280519  830558 command_runner.go:130] >       {
	I1210 06:31:43.280525  830558 command_runner.go:130] >         "Config": {
	I1210 06:31:43.280531  830558 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1210 06:31:43.280536  830558 command_runner.go:130] >           "Name": "cni-loopback",
	I1210 06:31:43.280541  830558 command_runner.go:130] >           "Plugins": [
	I1210 06:31:43.280545  830558 command_runner.go:130] >             {
	I1210 06:31:43.280549  830558 command_runner.go:130] >               "Network": {
	I1210 06:31:43.280553  830558 command_runner.go:130] >                 "ipam": {},
	I1210 06:31:43.280572  830558 command_runner.go:130] >                 "type": "loopback"
	I1210 06:31:43.280586  830558 command_runner.go:130] >               },
	I1210 06:31:43.280593  830558 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1210 06:31:43.280596  830558 command_runner.go:130] >             }
	I1210 06:31:43.280600  830558 command_runner.go:130] >           ],
	I1210 06:31:43.280614  830558 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1210 06:31:43.280625  830558 command_runner.go:130] >         },
	I1210 06:31:43.280630  830558 command_runner.go:130] >         "IFName": "lo"
	I1210 06:31:43.280633  830558 command_runner.go:130] >       }
	I1210 06:31:43.280637  830558 command_runner.go:130] >     ],
	I1210 06:31:43.280642  830558 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1210 06:31:43.280652  830558 command_runner.go:130] >     "PluginDirs": [
	I1210 06:31:43.280656  830558 command_runner.go:130] >       "/opt/cni/bin"
	I1210 06:31:43.280660  830558 command_runner.go:130] >     ],
	I1210 06:31:43.280671  830558 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1210 06:31:43.280679  830558 command_runner.go:130] >     "Prefix": "eth"
	I1210 06:31:43.280682  830558 command_runner.go:130] >   },
	I1210 06:31:43.280686  830558 command_runner.go:130] >   "config": {
	I1210 06:31:43.280693  830558 command_runner.go:130] >     "cdiSpecDirs": [
	I1210 06:31:43.280699  830558 command_runner.go:130] >       "/etc/cdi",
	I1210 06:31:43.280705  830558 command_runner.go:130] >       "/var/run/cdi"
	I1210 06:31:43.280710  830558 command_runner.go:130] >     ],
	I1210 06:31:43.280714  830558 command_runner.go:130] >     "cni": {
	I1210 06:31:43.280725  830558 command_runner.go:130] >       "binDir": "",
	I1210 06:31:43.280729  830558 command_runner.go:130] >       "binDirs": [
	I1210 06:31:43.280732  830558 command_runner.go:130] >         "/opt/cni/bin"
	I1210 06:31:43.280736  830558 command_runner.go:130] >       ],
	I1210 06:31:43.280740  830558 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1210 06:31:43.280744  830558 command_runner.go:130] >       "confTemplate": "",
	I1210 06:31:43.280747  830558 command_runner.go:130] >       "ipPref": "",
	I1210 06:31:43.280751  830558 command_runner.go:130] >       "maxConfNum": 1,
	I1210 06:31:43.280755  830558 command_runner.go:130] >       "setupSerially": false,
	I1210 06:31:43.280759  830558 command_runner.go:130] >       "useInternalLoopback": false
	I1210 06:31:43.280762  830558 command_runner.go:130] >     },
	I1210 06:31:43.280768  830558 command_runner.go:130] >     "containerd": {
	I1210 06:31:43.280772  830558 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1210 06:31:43.280776  830558 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1210 06:31:43.280781  830558 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1210 06:31:43.280789  830558 command_runner.go:130] >       "runtimes": {
	I1210 06:31:43.280793  830558 command_runner.go:130] >         "runc": {
	I1210 06:31:43.280797  830558 command_runner.go:130] >           "ContainerAnnotations": null,
	I1210 06:31:43.280802  830558 command_runner.go:130] >           "PodAnnotations": null,
	I1210 06:31:43.280806  830558 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1210 06:31:43.280811  830558 command_runner.go:130] >           "cgroupWritable": false,
	I1210 06:31:43.280814  830558 command_runner.go:130] >           "cniConfDir": "",
	I1210 06:31:43.280818  830558 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1210 06:31:43.280822  830558 command_runner.go:130] >           "io_type": "",
	I1210 06:31:43.280827  830558 command_runner.go:130] >           "options": {
	I1210 06:31:43.280838  830558 command_runner.go:130] >             "BinaryName": "",
	I1210 06:31:43.280850  830558 command_runner.go:130] >             "CriuImagePath": "",
	I1210 06:31:43.280854  830558 command_runner.go:130] >             "CriuWorkPath": "",
	I1210 06:31:43.280858  830558 command_runner.go:130] >             "IoGid": 0,
	I1210 06:31:43.280862  830558 command_runner.go:130] >             "IoUid": 0,
	I1210 06:31:43.280866  830558 command_runner.go:130] >             "NoNewKeyring": false,
	I1210 06:31:43.280872  830558 command_runner.go:130] >             "Root": "",
	I1210 06:31:43.280877  830558 command_runner.go:130] >             "ShimCgroup": "",
	I1210 06:31:43.280883  830558 command_runner.go:130] >             "SystemdCgroup": false
	I1210 06:31:43.280887  830558 command_runner.go:130] >           },
	I1210 06:31:43.280892  830558 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1210 06:31:43.280898  830558 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1210 06:31:43.280902  830558 command_runner.go:130] >           "runtimePath": "",
	I1210 06:31:43.280907  830558 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1210 06:31:43.280912  830558 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1210 06:31:43.280918  830558 command_runner.go:130] >           "snapshotter": ""
	I1210 06:31:43.280921  830558 command_runner.go:130] >         }
	I1210 06:31:43.280925  830558 command_runner.go:130] >       }
	I1210 06:31:43.280930  830558 command_runner.go:130] >     },
	I1210 06:31:43.280941  830558 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1210 06:31:43.280949  830558 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1210 06:31:43.280959  830558 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1210 06:31:43.280965  830558 command_runner.go:130] >     "disableApparmor": false,
	I1210 06:31:43.280970  830558 command_runner.go:130] >     "disableHugetlbController": true,
	I1210 06:31:43.280976  830558 command_runner.go:130] >     "disableProcMount": false,
	I1210 06:31:43.280983  830558 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1210 06:31:43.280986  830558 command_runner.go:130] >     "enableCDI": true,
	I1210 06:31:43.280991  830558 command_runner.go:130] >     "enableSelinux": false,
	I1210 06:31:43.280995  830558 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1210 06:31:43.281002  830558 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1210 06:31:43.281009  830558 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1210 06:31:43.281014  830558 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1210 06:31:43.281021  830558 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1210 06:31:43.281029  830558 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1210 06:31:43.281034  830558 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1210 06:31:43.281040  830558 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1210 06:31:43.281047  830558 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1210 06:31:43.281052  830558 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1210 06:31:43.281057  830558 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1210 06:31:43.281062  830558 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1210 06:31:43.281067  830558 command_runner.go:130] >   },
	I1210 06:31:43.281071  830558 command_runner.go:130] >   "features": {
	I1210 06:31:43.281076  830558 command_runner.go:130] >     "supplemental_groups_policy": true
	I1210 06:31:43.281079  830558 command_runner.go:130] >   },
	I1210 06:31:43.281083  830558 command_runner.go:130] >   "golang": "go1.24.9",
	I1210 06:31:43.281095  830558 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1210 06:31:43.281107  830558 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1210 06:31:43.281111  830558 command_runner.go:130] >   "runtimeHandlers": [
	I1210 06:31:43.281114  830558 command_runner.go:130] >     {
	I1210 06:31:43.281118  830558 command_runner.go:130] >       "features": {
	I1210 06:31:43.281129  830558 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1210 06:31:43.281134  830558 command_runner.go:130] >         "user_namespaces": true
	I1210 06:31:43.281137  830558 command_runner.go:130] >       }
	I1210 06:31:43.281142  830558 command_runner.go:130] >     },
	I1210 06:31:43.281145  830558 command_runner.go:130] >     {
	I1210 06:31:43.281148  830558 command_runner.go:130] >       "features": {
	I1210 06:31:43.281153  830558 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1210 06:31:43.281158  830558 command_runner.go:130] >         "user_namespaces": true
	I1210 06:31:43.281161  830558 command_runner.go:130] >       },
	I1210 06:31:43.281168  830558 command_runner.go:130] >       "name": "runc"
	I1210 06:31:43.281171  830558 command_runner.go:130] >     }
	I1210 06:31:43.281174  830558 command_runner.go:130] >   ],
	I1210 06:31:43.281178  830558 command_runner.go:130] >   "status": {
	I1210 06:31:43.281183  830558 command_runner.go:130] >     "conditions": [
	I1210 06:31:43.281186  830558 command_runner.go:130] >       {
	I1210 06:31:43.281190  830558 command_runner.go:130] >         "message": "",
	I1210 06:31:43.281205  830558 command_runner.go:130] >         "reason": "",
	I1210 06:31:43.281209  830558 command_runner.go:130] >         "status": true,
	I1210 06:31:43.281214  830558 command_runner.go:130] >         "type": "RuntimeReady"
	I1210 06:31:43.281220  830558 command_runner.go:130] >       },
	I1210 06:31:43.281224  830558 command_runner.go:130] >       {
	I1210 06:31:43.281230  830558 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1210 06:31:43.281235  830558 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1210 06:31:43.281239  830558 command_runner.go:130] >         "status": false,
	I1210 06:31:43.281243  830558 command_runner.go:130] >         "type": "NetworkReady"
	I1210 06:31:43.281246  830558 command_runner.go:130] >       },
	I1210 06:31:43.281249  830558 command_runner.go:130] >       {
	I1210 06:31:43.281271  830558 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1210 06:31:43.281280  830558 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1210 06:31:43.281286  830558 command_runner.go:130] >         "status": false,
	I1210 06:31:43.281292  830558 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1210 06:31:43.281298  830558 command_runner.go:130] >       }
	I1210 06:31:43.281301  830558 command_runner.go:130] >     ]
	I1210 06:31:43.281304  830558 command_runner.go:130] >   }
	I1210 06:31:43.281308  830558 command_runner.go:130] > }
	I1210 06:31:43.283879  830558 cni.go:84] Creating CNI manager for ""
	I1210 06:31:43.283902  830558 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:31:43.283924  830558 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
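[editor's note] The NetworkReady=false condition in the crictl info dump above is expected at this stage: /etc/cni/net.d is still empty, which is exactly why minikube recommends kindnet for the docker-driver-plus-containerd combination. To re-check just that condition later, a sketch assuming jq on the host:

    docker exec functional-534748 sudo crictl info \
      | jq '.status.conditions[] | select(.type == "NetworkReady")'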
	I1210 06:31:43.283950  830558 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-534748 NodeName:functional-534748 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:31:43.284076  830558 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-534748"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
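[editor's note] The four YAML documents above are staged as /var/tmp/minikube/kubeadm.yaml.new (the 2237-byte scp below). Recent kubeadm releases can lint such a file before it is used; a hedged sketch using the binary minikube stages on the node:

    docker exec functional-534748 sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm \
      config validate --config /var/tmp/minikube/kubeadm.yaml.new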
	
	I1210 06:31:43.284154  830558 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 06:31:43.290942  830558 command_runner.go:130] > kubeadm
	I1210 06:31:43.290962  830558 command_runner.go:130] > kubectl
	I1210 06:31:43.290967  830558 command_runner.go:130] > kubelet
	I1210 06:31:43.291913  830558 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:31:43.292013  830558 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:31:43.299680  830558 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 06:31:43.314082  830558 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 06:31:43.330260  830558 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1210 06:31:43.347625  830558 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:31:43.352127  830558 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1210 06:31:43.352925  830558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:31:43.471703  830558 ssh_runner.go:195] Run: sudo systemctl start kubelet
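[editor's note] If the kubelet restart above misbehaves, its journal on the node is the first place to look (a sketch):

    docker exec functional-534748 sudo journalctl -u kubelet --no-pager -n 50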
	I1210 06:31:44.297320  830558 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748 for IP: 192.168.49.2
	I1210 06:31:44.297353  830558 certs.go:195] generating shared ca certs ...
	I1210 06:31:44.297370  830558 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:31:44.297565  830558 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 06:31:44.297620  830558 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 06:31:44.297640  830558 certs.go:257] generating profile certs ...
	I1210 06:31:44.297767  830558 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key
	I1210 06:31:44.297844  830558 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key.7cb3dc2f
	I1210 06:31:44.297905  830558 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key
	I1210 06:31:44.297923  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 06:31:44.297952  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 06:31:44.297969  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 06:31:44.297986  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 06:31:44.297997  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 06:31:44.298022  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 06:31:44.298036  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 06:31:44.298051  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 06:31:44.298107  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 06:31:44.298147  830558 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 06:31:44.298160  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:31:44.298194  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 06:31:44.298223  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:31:44.298262  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 06:31:44.298323  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 06:31:44.298363  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem -> /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.298380  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.298399  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.299062  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:31:44.319985  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:31:44.339121  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:31:44.360050  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:31:44.381013  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:31:44.398560  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:31:44.416157  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:31:44.433967  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:31:44.452197  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 06:31:44.470088  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 06:31:44.487844  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:31:44.505551  830558 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:31:44.518440  830558 ssh_runner.go:195] Run: openssl version
	I1210 06:31:44.524638  830558 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 06:31:44.525053  830558 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.532466  830558 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 06:31:44.539857  830558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.543663  830558 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.543696  830558 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.543746  830558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.585800  830558 command_runner.go:130] > 51391683
	I1210 06:31:44.586242  830558 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:31:44.594754  830558 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.602172  830558 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 06:31:44.609494  830558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.613294  830558 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.613412  830558 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.613500  830558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.654003  830558 command_runner.go:130] > 3ec20f2e
	I1210 06:31:44.654513  830558 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:31:44.661754  830558 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.668842  830558 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:31:44.676441  830558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.680175  830558 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.680286  830558 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.680373  830558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.725770  830558 command_runner.go:130] > b5213941
	I1210 06:31:44.726319  830558 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
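[editor's note] The hash/symlink dance above is how OpenSSL locates CA certificates: x509 -hash prints the subject-name hash (51391683, 3ec20f2e, b5213941 here), and verification expects a <hash>.0 symlink in /etc/ssl/certs pointing at the PEM. Condensed for one cert, the equivalent run on the node is:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem)
    sudo ln -fs /usr/share/ca-certificates/786751.pem "/etc/ssl/certs/${h}.0"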
	I1210 06:31:44.734095  830558 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:31:44.737911  830558 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:31:44.737986  830558 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1210 06:31:44.737999  830558 command_runner.go:130] > Device: 259,1	Inode: 1050653     Links: 1
	I1210 06:31:44.738007  830558 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 06:31:44.738013  830558 command_runner.go:130] > Access: 2025-12-10 06:27:36.644508596 +0000
	I1210 06:31:44.738018  830558 command_runner.go:130] > Modify: 2025-12-10 06:23:32.161941675 +0000
	I1210 06:31:44.738023  830558 command_runner.go:130] > Change: 2025-12-10 06:23:32.161941675 +0000
	I1210 06:31:44.738028  830558 command_runner.go:130] >  Birth: 2025-12-10 06:23:32.161941675 +0000
	I1210 06:31:44.738118  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:31:44.779233  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.779410  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:31:44.820004  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.820457  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:31:44.860741  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.861258  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:31:44.902039  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.902514  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:31:44.943742  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.944234  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 06:31:44.986027  830558 command_runner.go:130] > Certificate will not expire
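[editor's note] Each -checkend 86400 probe exits zero, printing "Certificate will not expire", only if the certificate stays valid for at least another 24 hours; that is how minikube decides a restart can reuse the existing certs. The same sweep over every staged cert, as a sketch run on the node:

    for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
      echo -n "$c: "; sudo openssl x509 -noout -in "$c" -checkend 86400
    done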
	I1210 06:31:44.986500  830558 kubeadm.go:401] StartCluster: {Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:31:44.986586  830558 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 06:31:44.986679  830558 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:31:45.063121  830558 cri.go:89] found id: ""
	I1210 06:31:45.063216  830558 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:31:45.099783  830558 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 06:31:45.099866  830558 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 06:31:45.099891  830558 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 06:31:45.101399  830558 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:31:45.101477  830558 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:31:45.101575  830558 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:31:45.115892  830558 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:31:45.116487  830558 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-534748" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:45.116718  830558 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-784887/kubeconfig needs updating (will repair): [kubeconfig missing "functional-534748" cluster setting kubeconfig missing "functional-534748" context setting]
	I1210 06:31:45.117177  830558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
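[editor's note] minikube repairs the kubeconfig in place when the profile's cluster and context entries are missing, as here. The manual equivalent from the host is the dedicated subcommand (a sketch):

    minikube -p functional-534748 update-context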
	I1210 06:31:45.117949  830558 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:45.118213  830558 kapi.go:59] client config for functional-534748: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key", CAFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:31:45.118984  830558 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 06:31:45.119085  830558 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 06:31:45.119134  830558 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 06:31:45.119161  830558 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 06:31:45.119217  830558 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 06:31:45.119055  830558 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 06:31:45.119702  830558 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:31:45.137495  830558 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1210 06:31:45.137534  830558 kubeadm.go:602] duration metric: took 36.034287ms to restartPrimaryControlPlane
	I1210 06:31:45.137546  830558 kubeadm.go:403] duration metric: took 151.054854ms to StartCluster
	I1210 06:31:45.137576  830558 settings.go:142] acquiring lock: {Name:mkea9bfe0055ec090e4ce10a287599541b65b38e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:31:45.137653  830558 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:45.138311  830558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:31:45.138643  830558 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 06:31:45.139043  830558 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:31:45.139108  830558 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:31:45.139177  830558 addons.go:70] Setting storage-provisioner=true in profile "functional-534748"
	I1210 06:31:45.139193  830558 addons.go:239] Setting addon storage-provisioner=true in "functional-534748"
	I1210 06:31:45.139221  830558 host.go:66] Checking if "functional-534748" exists ...
	I1210 06:31:45.139239  830558 addons.go:70] Setting default-storageclass=true in profile "functional-534748"
	I1210 06:31:45.139259  830558 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-534748"
	I1210 06:31:45.139583  830558 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:31:45.139701  830558 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:31:45.145574  830558 out.go:179] * Verifying Kubernetes components...
	I1210 06:31:45.148690  830558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:31:45.190248  830558 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:45.190435  830558 kapi.go:59] client config for functional-534748: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key", CAFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:31:45.190756  830558 addons.go:239] Setting addon default-storageclass=true in "functional-534748"
	I1210 06:31:45.190791  830558 host.go:66] Checking if "functional-534748" exists ...
	I1210 06:31:45.192137  830558 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:31:45.207281  830558 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:31:45.210256  830558 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:45.210285  830558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:31:45.210364  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:45.229978  830558 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:45.230080  830558 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:31:45.230235  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:45.286606  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:45.319378  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:45.390267  830558 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:31:45.420552  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:45.445487  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:46.049742  830558 node_ready.go:35] waiting up to 6m0s for node "functional-534748" to be "Ready" ...
	I1210 06:31:46.049893  830558 type.go:168] "Request Body" body=""
	I1210 06:31:46.049953  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:46.050234  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.050272  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.050293  830558 retry.go:31] will retry after 223.621304ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.050345  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.050359  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.050366  830558 retry.go:31] will retry after 336.04204ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
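(The apply failures above are retried with growing, jittered delays: 224 ms, 336 ms, 343 ms, 385 ms, and so on, until the apiserver comes back. A minimal sketch of that retry pattern using apimachinery's wait.ExponentialBackoff; the backoff constants and the shelling-out to kubectl are illustrative assumptions, not minikube's retry.go.)

package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	backoff := wait.Backoff{
		Duration: 200 * time.Millisecond, // first delay (assumed)
		Factor:   1.5,                    // grow each attempt
		Jitter:   0.5,                    // randomize, as the varying delays above suggest
		Steps:    10,                     // give up after 10 attempts
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		out, err := exec.Command("kubectl", "apply", "--force", "-f",
			"/etc/kubernetes/addons/storage-provisioner.yaml").CombinedOutput()
		if err != nil {
			fmt.Printf("apply failed, will retry: %v\n%s", err, out)
			return false, nil // not done; retry after the next backoff delay
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("gave up:", err) // wait.ErrWaitTimeout once Steps are exhausted
	}
}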
	I1210 06:31:46.050483  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:46.274791  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:46.331904  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.335903  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.335940  830558 retry.go:31] will retry after 342.637774ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.387178  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:46.449259  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.449297  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.449332  830558 retry.go:31] will retry after 384.971387ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.550591  830558 type.go:168] "Request Body" body=""
	I1210 06:31:46.550669  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:46.551072  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:46.679392  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:46.735005  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.738824  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.738907  830558 retry.go:31] will retry after 477.156435ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.835016  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:46.898535  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.902447  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.902505  830558 retry.go:31] will retry after 587.076477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:47.050715  830558 type.go:168] "Request Body" body=""
	I1210 06:31:47.050787  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:47.051147  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:47.216664  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:47.275932  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:47.275982  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:47.276003  830558 retry.go:31] will retry after 1.079016213s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:47.490360  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:47.550012  830558 type.go:168] "Request Body" body=""
	I1210 06:31:47.550089  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:47.550390  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:47.551946  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:47.551982  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:47.552018  830558 retry.go:31] will retry after 1.089774327s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:48.050900  830558 type.go:168] "Request Body" body=""
	I1210 06:31:48.051018  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:48.051381  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:48.051446  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
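(The GET requests above poll /api/v1/nodes/functional-534748 roughly every 500 ms, tolerating connection-refused while the apiserver restarts, within the 6m0s budget declared by node_ready.go. A minimal client-go sketch of the same readiness check; the kubeconfig path and poll constants are assumptions for illustration, not minikube's implementation.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Poll every 500 ms for up to 6 minutes, mirroring the cadence in the log.
	err = wait.PollUntilContextTimeout(context.Background(),
		500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "functional-534748", metav1.GetOptions{})
			if err != nil {
				// e.g. "connect: connection refused" while the apiserver is down; keep retrying.
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil // node reports Ready
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Println("node never became Ready:", err)
	}
}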
	I1210 06:31:48.355639  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:48.413382  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:48.416787  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:48.416855  830558 retry.go:31] will retry after 1.248652089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:48.550032  830558 type.go:168] "Request Body" body=""
	I1210 06:31:48.550107  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:48.550399  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:48.642762  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:48.712914  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:48.712955  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:48.712975  830558 retry.go:31] will retry after 929.620731ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:49.050281  830558 type.go:168] "Request Body" body=""
	I1210 06:31:49.050356  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:49.050675  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:49.550083  830558 type.go:168] "Request Body" body=""
	I1210 06:31:49.550159  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:49.550564  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:49.643743  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:49.666178  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:49.715961  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:49.724279  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:49.724309  830558 retry.go:31] will retry after 2.037720794s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:49.735770  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:49.735805  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:49.735824  830558 retry.go:31] will retry after 1.943919735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:50.050051  830558 type.go:168] "Request Body" body=""
	I1210 06:31:50.050130  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:50.050489  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:50.550100  830558 type.go:168] "Request Body" body=""
	I1210 06:31:50.550171  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:50.550494  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:50.550550  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:51.050020  830558 type.go:168] "Request Body" body=""
	I1210 06:31:51.050105  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:51.050456  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:51.550105  830558 type.go:168] "Request Body" body=""
	I1210 06:31:51.550181  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:51.550525  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:51.680862  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:51.745585  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:51.745620  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:51.745639  830558 retry.go:31] will retry after 2.112684099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:51.762814  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:51.821569  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:51.825567  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:51.825603  830558 retry.go:31] will retry after 2.699110245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:52.050957  830558 type.go:168] "Request Body" body=""
	I1210 06:31:52.051054  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:52.051439  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:52.550045  830558 type.go:168] "Request Body" body=""
	I1210 06:31:52.550123  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:52.550446  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:53.050176  830558 type.go:168] "Request Body" body=""
	I1210 06:31:53.050253  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:53.050635  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:53.050697  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:53.550816  830558 type.go:168] "Request Body" body=""
	I1210 06:31:53.550907  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:53.551250  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:53.858630  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:53.918073  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:53.921869  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:53.921905  830558 retry.go:31] will retry after 2.635687612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:54.050031  830558 type.go:168] "Request Body" body=""
	I1210 06:31:54.050137  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:54.050511  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:54.525086  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:54.550579  830558 type.go:168] "Request Body" body=""
	I1210 06:31:54.550656  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:54.550932  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:54.585338  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:54.588955  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:54.588990  830558 retry.go:31] will retry after 2.164216453s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:55.050098  830558 type.go:168] "Request Body" body=""
	I1210 06:31:55.050166  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:55.050436  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:55.550589  830558 type.go:168] "Request Body" body=""
	I1210 06:31:55.550697  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:55.551055  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:55.551113  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:56.050733  830558 type.go:168] "Request Body" body=""
	I1210 06:31:56.050815  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:56.051188  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:56.549910  830558 type.go:168] "Request Body" body=""
	I1210 06:31:56.550000  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:56.550302  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:56.558696  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:56.634154  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:56.634201  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:56.634222  830558 retry.go:31] will retry after 5.842380515s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:56.753466  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:56.822332  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:56.822371  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:56.822391  830558 retry.go:31] will retry after 4.388036914s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:57.050861  830558 type.go:168] "Request Body" body=""
	I1210 06:31:57.050942  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:57.051261  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:57.550002  830558 type.go:168] "Request Body" body=""
	I1210 06:31:57.550079  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:57.550421  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:58.049946  830558 type.go:168] "Request Body" body=""
	I1210 06:31:58.050027  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:58.050302  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:58.050362  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:58.550039  830558 type.go:168] "Request Body" body=""
	I1210 06:31:58.550138  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:58.550513  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:59.050184  830558 type.go:168] "Request Body" body=""
	I1210 06:31:59.050262  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:59.050626  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:59.550005  830558 type.go:168] "Request Body" body=""
	I1210 06:31:59.550077  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:59.550409  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:00.050135  830558 type.go:168] "Request Body" body=""
	I1210 06:32:00.050225  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:00.050569  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:00.050621  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:00.550824  830558 type.go:168] "Request Body" body=""
	I1210 06:32:00.550903  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:00.551281  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:01.050843  830558 type.go:168] "Request Body" body=""
	I1210 06:32:01.050921  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:01.051196  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:01.210631  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:32:01.270135  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:01.273736  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:01.273765  830558 retry.go:31] will retry after 7.330909522s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:01.550049  830558 type.go:168] "Request Body" body=""
	I1210 06:32:01.550130  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:01.550449  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:02.050246  830558 type.go:168] "Request Body" body=""
	I1210 06:32:02.050347  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:02.050709  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:02.050768  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:02.477366  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:32:02.540275  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:02.540316  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:02.540336  830558 retry.go:31] will retry after 13.941322707s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:02.550443  830558 type.go:168] "Request Body" body=""
	I1210 06:32:02.550571  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:02.550850  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:03.050685  830558 type.go:168] "Request Body" body=""
	I1210 06:32:03.050764  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:03.051097  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:03.550804  830558 type.go:168] "Request Body" body=""
	I1210 06:32:03.550886  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:03.551211  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:04.050812  830558 type.go:168] "Request Body" body=""
	I1210 06:32:04.050903  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:04.051169  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:04.051225  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:04.549972  830558 type.go:168] "Request Body" body=""
	I1210 06:32:04.550053  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:04.550400  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:05.050150  830558 type.go:168] "Request Body" body=""
	I1210 06:32:05.050229  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:05.050552  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:05.550574  830558 type.go:168] "Request Body" body=""
	I1210 06:32:05.550641  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:05.550922  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:06.050749  830558 type.go:168] "Request Body" body=""
	I1210 06:32:06.050829  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:06.051208  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:06.051274  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:06.549940  830558 type.go:168] "Request Body" body=""
	I1210 06:32:06.550016  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:06.550350  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:07.050649  830558 type.go:168] "Request Body" body=""
	I1210 06:32:07.050725  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:07.050985  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:07.550782  830558 type.go:168] "Request Body" body=""
	I1210 06:32:07.550862  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:07.551221  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:08.050933  830558 type.go:168] "Request Body" body=""
	I1210 06:32:08.051023  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:08.051376  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:08.051435  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:08.550082  830558 type.go:168] "Request Body" body=""
	I1210 06:32:08.550158  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:08.550423  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:08.605823  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:32:08.661807  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:08.666022  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:08.666054  830558 retry.go:31] will retry after 18.459732711s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
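	(Each failed addon apply is handed back to a retry helper; note the uneven delays in this log — 18.46s here, then 7.24s, 12.52s, 11.07s below — which suggest randomized backoff between attempts. An illustrative sketch of that retry shape, not minikube's actual retry.go:

		package retrysketch

		import (
			"log"
			"math"
			"math/rand"
			"time"
		)

		// applyWithRetry re-runs fn until it succeeds or attempts are
		// exhausted, sleeping an exponentially growing, jittered delay
		// between tries, like the "will retry after ..." lines above.
		func applyWithRetry(fn func() error, attempts int, base time.Duration) error {
			var err error
			for i := 0; i < attempts; i++ {
				if err = fn(); err == nil {
					return nil
				}
				// Exponential backoff scaled by +/-50% random jitter.
				sleep := time.Duration(float64(base) * math.Pow(2, float64(i)) * (0.5 + rand.Float64()))
				log.Printf("will retry after %v: %v", sleep, err)
				time.Sleep(sleep)
			}
			return err
		}
	)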
	I1210 06:32:09.050632  830558 type.go:168] "Request Body" body=""
	I1210 06:32:09.050712  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:09.051043  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:09.550857  830558 type.go:168] "Request Body" body=""
	I1210 06:32:09.550938  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:09.551276  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:10.050543  830558 type.go:168] "Request Body" body=""
	I1210 06:32:10.050622  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:10.050913  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:10.550123  830558 type.go:168] "Request Body" body=""
	I1210 06:32:10.550201  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:10.550566  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:10.550627  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:11.050158  830558 type.go:168] "Request Body" body=""
	I1210 06:32:11.050241  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:11.050595  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:11.549977  830558 type.go:168] "Request Body" body=""
	I1210 06:32:11.550061  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:11.550370  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:12.050064  830558 type.go:168] "Request Body" body=""
	I1210 06:32:12.050145  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:12.050512  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:12.550074  830558 type.go:168] "Request Body" body=""
	I1210 06:32:12.550151  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:12.550550  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:13.050834  830558 type.go:168] "Request Body" body=""
	I1210 06:32:13.050904  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:13.051215  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:13.051271  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:13.549985  830558 type.go:168] "Request Body" body=""
	I1210 06:32:13.550076  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:13.550390  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:14.050138  830558 type.go:168] "Request Body" body=""
	I1210 06:32:14.050216  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:14.050575  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:14.550278  830558 type.go:168] "Request Body" body=""
	I1210 06:32:14.550375  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:14.550721  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:15.050080  830558 type.go:168] "Request Body" body=""
	I1210 06:32:15.050169  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:15.050590  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:15.550609  830558 type.go:168] "Request Body" body=""
	I1210 06:32:15.550687  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:15.551021  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:15.551080  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:16.050637  830558 type.go:168] "Request Body" body=""
	I1210 06:32:16.050708  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:16.050991  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:16.482787  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:32:16.542663  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:16.546278  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:16.546307  830558 retry.go:31] will retry after 7.242230365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
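	(The apply itself runs inside the node (ssh_runner.go:195), invoking the version-pinned kubectl against the on-disk kubeconfig; it fails here only because kubectl cannot fetch the OpenAPI schema from localhost:8441 to validate the manifest. A simplified local sketch of that invocation — an assumed shape; minikube actually executes this over SSH in the node container:

		package applysketch

		import (
			"fmt"
			"os/exec"
		)

		// runKubectlApply mirrors the command in the log: sudo accepts a
		// VAR=value prefix, so KUBECONFIG is set for the pinned kubectl.
		func runKubectlApply(manifest string) (string, error) {
			cmd := exec.Command("sudo",
				"KUBECONFIG=/var/lib/minikube/kubeconfig",
				"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
				"apply", "--force", "-f", manifest)
			out, err := cmd.CombinedOutput()
			if err != nil {
				return string(out), fmt.Errorf("apply %s failed: %w", manifest, err)
			}
			return string(out), nil
		}
	)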
	I1210 06:32:16.550430  830558 type.go:168] "Request Body" body=""
	I1210 06:32:16.550511  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:16.550807  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:17.050649  830558 type.go:168] "Request Body" body=""
	I1210 06:32:17.050741  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:17.051138  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:17.550461  830558 type.go:168] "Request Body" body=""
	I1210 06:32:17.550553  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:17.550825  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:18.050619  830558 type.go:168] "Request Body" body=""
	I1210 06:32:18.050699  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:18.051034  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:18.051091  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:18.550728  830558 type.go:168] "Request Body" body=""
	I1210 06:32:18.550817  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:18.551143  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:19.050890  830558 type.go:168] "Request Body" body=""
	I1210 06:32:19.050958  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:19.051259  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:19.550945  830558 type.go:168] "Request Body" body=""
	I1210 06:32:19.551021  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:19.551375  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:20.049992  830558 type.go:168] "Request Body" body=""
	I1210 06:32:20.050068  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:20.050449  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:20.549971  830558 type.go:168] "Request Body" body=""
	I1210 06:32:20.550047  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:20.550340  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:20.550389  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:21.049976  830558 type.go:168] "Request Body" body=""
	I1210 06:32:21.050049  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:21.050379  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:21.550111  830558 type.go:168] "Request Body" body=""
	I1210 06:32:21.550187  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:21.550564  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:22.050899  830558 type.go:168] "Request Body" body=""
	I1210 06:32:22.050974  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:22.051306  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:22.550116  830558 type.go:168] "Request Body" body=""
	I1210 06:32:22.550195  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:22.550553  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:22.550614  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:23.050042  830558 type.go:168] "Request Body" body=""
	I1210 06:32:23.050118  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:23.050459  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:23.549939  830558 type.go:168] "Request Body" body=""
	I1210 06:32:23.550009  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:23.550297  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:23.788809  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:32:23.847955  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:23.851833  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:23.851867  830558 retry.go:31] will retry after 12.516286884s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:24.050060  830558 type.go:168] "Request Body" body=""
	I1210 06:32:24.050142  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:24.050525  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:24.550248  830558 type.go:168] "Request Body" body=""
	I1210 06:32:24.550322  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:24.550678  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:24.550736  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:25.050033  830558 type.go:168] "Request Body" body=""
	I1210 06:32:25.050114  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:25.050546  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:25.550682  830558 type.go:168] "Request Body" body=""
	I1210 06:32:25.550758  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:25.551068  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:26.050934  830558 type.go:168] "Request Body" body=""
	I1210 06:32:26.051011  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:26.051351  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:26.549946  830558 type.go:168] "Request Body" body=""
	I1210 06:32:26.550023  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:26.550287  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:27.050019  830558 type.go:168] "Request Body" body=""
	I1210 06:32:27.050090  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:27.050429  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:27.050507  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:27.126908  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:32:27.191358  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:27.191398  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:27.191417  830558 retry.go:31] will retry after 11.065094951s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:27.549993  830558 type.go:168] "Request Body" body=""
	I1210 06:32:27.550078  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:27.550424  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:28.050147  830558 type.go:168] "Request Body" body=""
	I1210 06:32:28.050242  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:28.050581  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:28.550132  830558 type.go:168] "Request Body" body=""
	I1210 06:32:28.550207  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:28.550541  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:29.050024  830558 type.go:168] "Request Body" body=""
	I1210 06:32:29.050122  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:29.050535  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:29.050590  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:29.550851  830558 type.go:168] "Request Body" body=""
	I1210 06:32:29.550933  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:29.551212  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:30.050015  830558 type.go:168] "Request Body" body=""
	I1210 06:32:30.050111  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:30.050559  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:30.550493  830558 type.go:168] "Request Body" body=""
	I1210 06:32:30.550571  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:30.550933  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:31.050570  830558 type.go:168] "Request Body" body=""
	I1210 06:32:31.050667  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:31.050939  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:31.050993  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:31.550742  830558 type.go:168] "Request Body" body=""
	I1210 06:32:31.550827  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:31.551169  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:32.050826  830558 type.go:168] "Request Body" body=""
	I1210 06:32:32.050910  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:32.051237  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:32.549938  830558 type.go:168] "Request Body" body=""
	I1210 06:32:32.550010  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:32.550264  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:33.050015  830558 type.go:168] "Request Body" body=""
	I1210 06:32:33.050091  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:33.050451  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:33.550173  830558 type.go:168] "Request Body" body=""
	I1210 06:32:33.550258  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:33.550581  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:33.550638  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:34.049992  830558 type.go:168] "Request Body" body=""
	I1210 06:32:34.050060  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:34.050330  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:34.550070  830558 type.go:168] "Request Body" body=""
	I1210 06:32:34.550159  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:34.550540  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:35.050253  830558 type.go:168] "Request Body" body=""
	I1210 06:32:35.050340  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:35.050688  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:35.550817  830558 type.go:168] "Request Body" body=""
	I1210 06:32:35.550922  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:35.551259  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:35.551320  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:36.049997  830558 type.go:168] "Request Body" body=""
	I1210 06:32:36.050082  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:36.050415  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:36.369119  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:32:36.431728  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:36.431764  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:36.431783  830558 retry.go:31] will retry after 39.090862924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:36.549963  830558 type.go:168] "Request Body" body=""
	I1210 06:32:36.550043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:36.550375  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:37.050652  830558 type.go:168] "Request Body" body=""
	I1210 06:32:37.050724  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:37.050986  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:37.550839  830558 type.go:168] "Request Body" body=""
	I1210 06:32:37.550916  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:37.551209  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:38.049961  830558 type.go:168] "Request Body" body=""
	I1210 06:32:38.050040  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:38.050387  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:38.050446  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:38.256706  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:32:38.315606  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:38.315652  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:38.315671  830558 retry.go:31] will retry after 24.874249468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:38.549965  830558 type.go:168] "Request Body" body=""
	I1210 06:32:38.550037  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:38.550353  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:39.050035  830558 type.go:168] "Request Body" body=""
	I1210 06:32:39.050112  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:39.050437  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:39.550165  830558 type.go:168] "Request Body" body=""
	I1210 06:32:39.550240  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:39.550611  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:40.050932  830558 type.go:168] "Request Body" body=""
	I1210 06:32:40.051023  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:40.051412  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:40.051484  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:40.550007  830558 type.go:168] "Request Body" body=""
	I1210 06:32:40.550092  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:40.550424  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:41.050151  830558 type.go:168] "Request Body" body=""
	I1210 06:32:41.050226  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:41.050542  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:41.549934  830558 type.go:168] "Request Body" body=""
	I1210 06:32:41.550007  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:41.550347  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:42.050083  830558 type.go:168] "Request Body" body=""
	I1210 06:32:42.050160  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:42.050554  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:42.550115  830558 type.go:168] "Request Body" body=""
	I1210 06:32:42.550200  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:42.550557  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:42.550613  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:43.050266  830558 type.go:168] "Request Body" body=""
	I1210 06:32:43.050343  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:43.050649  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:43.549979  830558 type.go:168] "Request Body" body=""
	I1210 06:32:43.550060  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:43.550388  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:44.049985  830558 type.go:168] "Request Body" body=""
	I1210 06:32:44.050061  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:44.050403  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:44.549913  830558 type.go:168] "Request Body" body=""
	I1210 06:32:44.549995  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:44.550254  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:45.050148  830558 type.go:168] "Request Body" body=""
	I1210 06:32:45.050255  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:45.050774  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:45.050854  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:45.549930  830558 type.go:168] "Request Body" body=""
	I1210 06:32:45.550027  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:45.550349  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:46.050055  830558 type.go:168] "Request Body" body=""
	I1210 06:32:46.050137  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:46.050451  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:46.550031  830558 type.go:168] "Request Body" body=""
	I1210 06:32:46.550106  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:46.550449  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:47.050187  830558 type.go:168] "Request Body" body=""
	I1210 06:32:47.050264  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:47.050652  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:47.550359  830558 type.go:168] "Request Body" body=""
	I1210 06:32:47.550435  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:47.550733  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:47.550791  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:48.050535  830558 type.go:168] "Request Body" body=""
	I1210 06:32:48.050612  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:48.050950  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:48.550625  830558 type.go:168] "Request Body" body=""
	I1210 06:32:48.550703  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:48.551027  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:49.050305  830558 type.go:168] "Request Body" body=""
	I1210 06:32:49.050380  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:49.050665  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:49.550026  830558 type.go:168] "Request Body" body=""
	I1210 06:32:49.550106  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:49.550494  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:50.050211  830558 type.go:168] "Request Body" body=""
	I1210 06:32:50.050293  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:50.050654  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:50.050715  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:50.550658  830558 type.go:168] "Request Body" body=""
	I1210 06:32:50.550732  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:50.550987  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:51.050776  830558 type.go:168] "Request Body" body=""
	I1210 06:32:51.050858  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:51.051172  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:51.549919  830558 type.go:168] "Request Body" body=""
	I1210 06:32:51.549999  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:51.550341  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:52.049978  830558 type.go:168] "Request Body" body=""
	I1210 06:32:52.050053  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:52.050371  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:52.550001  830558 type.go:168] "Request Body" body=""
	I1210 06:32:52.550075  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:52.550411  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:52.550496  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:53.050025  830558 type.go:168] "Request Body" body=""
	I1210 06:32:53.050100  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:53.050443  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:53.550167  830558 type.go:168] "Request Body" body=""
	I1210 06:32:53.550287  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:53.550564  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:54.050027  830558 type.go:168] "Request Body" body=""
	I1210 06:32:54.050122  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:54.050442  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:54.550226  830558 type.go:168] "Request Body" body=""
	I1210 06:32:54.550303  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:54.550659  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:54.550719  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:55.049976  830558 type.go:168] "Request Body" body=""
	I1210 06:32:55.050049  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:55.050343  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:55.550553  830558 type.go:168] "Request Body" body=""
	I1210 06:32:55.550627  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:55.550930  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:56.050724  830558 type.go:168] "Request Body" body=""
	I1210 06:32:56.050807  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:56.051136  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:56.550392  830558 type.go:168] "Request Body" body=""
	I1210 06:32:56.550490  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:56.550765  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:56.550815  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:57.050617  830558 type.go:168] "Request Body" body=""
	I1210 06:32:57.050698  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:57.051032  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:57.550880  830558 type.go:168] "Request Body" body=""
	I1210 06:32:57.550957  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:57.551319  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:58.050503  830558 type.go:168] "Request Body" body=""
	I1210 06:32:58.050584  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:58.050859  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:58.550636  830558 type.go:168] "Request Body" body=""
	I1210 06:32:58.550712  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:58.551061  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:58.551119  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:59.050715  830558 type.go:168] "Request Body" body=""
	I1210 06:32:59.050796  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:59.051120  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:59.550919  830558 type.go:168] "Request Body" body=""
	I1210 06:32:59.550998  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:59.551267  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:00.050026  830558 type.go:168] "Request Body" body=""
	I1210 06:33:00.050119  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:00.052318  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:33:00.550554  830558 type.go:168] "Request Body" body=""
	I1210 06:33:00.550633  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:00.550978  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:01.050281  830558 type.go:168] "Request Body" body=""
	I1210 06:33:01.050351  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:01.050633  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:01.050680  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:01.550031  830558 type.go:168] "Request Body" body=""
	I1210 06:33:01.550108  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:01.550446  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:02.050197  830558 type.go:168] "Request Body" body=""
	I1210 06:33:02.050277  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:02.050651  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:02.550347  830558 type.go:168] "Request Body" body=""
	I1210 06:33:02.550420  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:02.550704  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:03.050003  830558 type.go:168] "Request Body" body=""
	I1210 06:33:03.050076  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:03.050408  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:03.190859  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:33:03.248648  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:33:03.248694  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:33:03.248794  830558 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
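The storageclass addon fails for the same root cause as the node polls above: kubectl's client-side validation tries to download the OpenAPI schema from the apiserver at localhost:8441, the connection is refused, and apply exits non-zero, so minikube records "apply failed, will retry". A rough sketch of such a retry wrapper, shelling out to the same command shown in the log (the attempt count and backoff below are invented for illustration):

    // applyWithRetry re-runs `kubectl apply` until it succeeds or the attempt
    // budget is spent; useful when the only failure mode is a briefly
    // unreachable apiserver, as in this trace.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func applyWithRetry(manifest string, attempts int, backoff time.Duration) error {
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		cmd := exec.Command("sudo",
    			"KUBECONFIG=/var/lib/minikube/kubeconfig", // env assignment passed through sudo, as in the log
    			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
    			"apply", "--force", "-f", manifest)
    		out, err := cmd.CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		lastErr = fmt.Errorf("apply %s: %w\n%s", manifest, err, out)
    		time.Sleep(backoff)
    	}
    	return lastErr
    }

    func main() {
    	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5, 10*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }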
	I1210 06:33:03.550044  830558 type.go:168] "Request Body" body=""
	I1210 06:33:03.550122  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:03.550404  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:03.550454  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:04.050739  830558 type.go:168] "Request Body" body=""
	I1210 06:33:04.050814  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:04.051133  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:04.550977  830558 type.go:168] "Request Body" body=""
	I1210 06:33:04.551052  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:04.551392  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:05.050105  830558 type.go:168] "Request Body" body=""
	I1210 06:33:05.050184  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:05.050531  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:05.550435  830558 type.go:168] "Request Body" body=""
	I1210 06:33:05.550528  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:05.550787  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:05.550829  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:06.050557  830558 type.go:168] "Request Body" body=""
	I1210 06:33:06.050630  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:06.050961  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:06.550801  830558 type.go:168] "Request Body" body=""
	I1210 06:33:06.550879  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:06.551223  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:07.049908  830558 type.go:168] "Request Body" body=""
	I1210 06:33:07.049978  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:07.050285  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:07.550020  830558 type.go:168] "Request Body" body=""
	I1210 06:33:07.550098  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:07.550444  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:08.050180  830558 type.go:168] "Request Body" body=""
	I1210 06:33:08.050261  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:08.050656  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:08.050717  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:08.549966  830558 type.go:168] "Request Body" body=""
	I1210 06:33:08.550043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:08.550358  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:09.050033  830558 type.go:168] "Request Body" body=""
	I1210 06:33:09.050114  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:09.050432  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:09.550043  830558 type.go:168] "Request Body" body=""
	I1210 06:33:09.550121  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:09.550501  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:10.050055  830558 type.go:168] "Request Body" body=""
	I1210 06:33:10.050127  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:10.050401  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:10.550597  830558 type.go:168] "Request Body" body=""
	I1210 06:33:10.550682  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:10.551012  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:10.551066  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:11.050806  830558 type.go:168] "Request Body" body=""
	I1210 06:33:11.050883  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:11.051219  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:11.550460  830558 type.go:168] "Request Body" body=""
	I1210 06:33:11.550568  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:11.550827  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:12.050637  830558 type.go:168] "Request Body" body=""
	I1210 06:33:12.050716  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:12.051056  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:12.550879  830558 type.go:168] "Request Body" body=""
	I1210 06:33:12.550959  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:12.551385  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:12.551442  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:13.049924  830558 type.go:168] "Request Body" body=""
	I1210 06:33:13.049995  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:13.050301  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:13.549989  830558 type.go:168] "Request Body" body=""
	I1210 06:33:13.550071  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:13.550389  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:14.050007  830558 type.go:168] "Request Body" body=""
	I1210 06:33:14.050083  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:14.050417  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:14.550127  830558 type.go:168] "Request Body" body=""
	I1210 06:33:14.550200  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:14.550484  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:15.050155  830558 type.go:168] "Request Body" body=""
	I1210 06:33:15.050238  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:15.050632  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:15.050702  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:15.522803  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:33:15.550270  830558 type.go:168] "Request Body" body=""
	I1210 06:33:15.550344  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:15.550631  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:15.583628  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:33:15.587769  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:33:15.587875  830558 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 06:33:15.590972  830558 out.go:179] * Enabled addons: 
	I1210 06:33:15.594685  830558 addons.go:530] duration metric: took 1m30.455573868s for enable addons: enabled=[]
	I1210 06:33:16.049998  830558 type.go:168] "Request Body" body=""
	I1210 06:33:16.050079  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:16.050410  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:16.550044  830558 type.go:168] "Request Body" body=""
	I1210 06:33:16.550122  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:16.550449  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:17.050003  830558 type.go:168] "Request Body" body=""
	I1210 06:33:17.050101  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:17.050382  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:17.549964  830558 type.go:168] "Request Body" body=""
	I1210 06:33:17.550065  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:17.550363  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:17.550413  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:18.050065  830558 type.go:168] "Request Body" body=""
	I1210 06:33:18.050137  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:18.050504  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:18.550195  830558 type.go:168] "Request Body" body=""
	I1210 06:33:18.550271  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:18.550617  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:19.050795  830558 type.go:168] "Request Body" body=""
	I1210 06:33:19.050864  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:19.051173  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:19.550924  830558 type.go:168] "Request Body" body=""
	I1210 06:33:19.551041  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:19.551366  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:19.551422  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:20.049936  830558 type.go:168] "Request Body" body=""
	I1210 06:33:20.050041  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:20.050392  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:20.549967  830558 type.go:168] "Request Body" body=""
	I1210 06:33:20.550046  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:20.550354  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:21.050040  830558 type.go:168] "Request Body" body=""
	I1210 06:33:21.050115  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:21.050434  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:21.550032  830558 type.go:168] "Request Body" body=""
	I1210 06:33:21.550110  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:21.550431  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:22.049927  830558 type.go:168] "Request Body" body=""
	I1210 06:33:22.049998  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:22.050320  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:22.050369  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:22.550025  830558 type.go:168] "Request Body" body=""
	I1210 06:33:22.550107  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:22.550455  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:23.050195  830558 type.go:168] "Request Body" body=""
	I1210 06:33:23.050272  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:23.050681  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:23.549948  830558 type.go:168] "Request Body" body=""
	I1210 06:33:23.550016  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:23.550276  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:24.049985  830558 type.go:168] "Request Body" body=""
	I1210 06:33:24.050060  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:24.050402  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:24.050460  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:24.550132  830558 type.go:168] "Request Body" body=""
	I1210 06:33:24.550213  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:24.550552  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:25.049958  830558 type.go:168] "Request Body" body=""
	I1210 06:33:25.050033  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:25.050287  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:25.550502  830558 type.go:168] "Request Body" body=""
	I1210 06:33:25.550576  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:25.550881  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:26.050647  830558 type.go:168] "Request Body" body=""
	I1210 06:33:26.050720  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:26.051065  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:26.051131  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:26.550815  830558 type.go:168] "Request Body" body=""
	I1210 06:33:26.550883  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:26.551145  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:27.049919  830558 type.go:168] "Request Body" body=""
	I1210 06:33:27.050002  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:27.050335  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:27.550026  830558 type.go:168] "Request Body" body=""
	I1210 06:33:27.550106  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:27.550459  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:28.050765  830558 type.go:168] "Request Body" body=""
	I1210 06:33:28.050846  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:28.051128  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:28.051173  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:28.550887  830558 type.go:168] "Request Body" body=""
	I1210 06:33:28.550964  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:28.551314  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:29.050005  830558 type.go:168] "Request Body" body=""
	I1210 06:33:29.050094  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:29.050428  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:29.549962  830558 type.go:168] "Request Body" body=""
	I1210 06:33:29.550045  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:29.550327  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:30.050074  830558 type.go:168] "Request Body" body=""
	I1210 06:33:30.050166  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:30.050611  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:30.550611  830558 type.go:168] "Request Body" body=""
	I1210 06:33:30.550706  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:30.551062  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:30.551116  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:31.050373  830558 type.go:168] "Request Body" body=""
	I1210 06:33:31.050446  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:31.050762  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:31.550563  830558 type.go:168] "Request Body" body=""
	I1210 06:33:31.550642  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:31.550963  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:32.050761  830558 type.go:168] "Request Body" body=""
	I1210 06:33:32.050841  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:32.051145  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:32.550438  830558 type.go:168] "Request Body" body=""
	I1210 06:33:32.550527  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:32.550836  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:33.050606  830558 type.go:168] "Request Body" body=""
	I1210 06:33:33.050687  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:33.051001  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:33.051058  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:33.550797  830558 type.go:168] "Request Body" body=""
	I1210 06:33:33.550872  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:33.551204  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:34.050446  830558 type.go:168] "Request Body" body=""
	I1210 06:33:34.050542  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:34.050806  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:34.550570  830558 type.go:168] "Request Body" body=""
	I1210 06:33:34.550651  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:34.551007  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:35.050684  830558 type.go:168] "Request Body" body=""
	I1210 06:33:35.050765  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:35.051121  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:35.051180  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:35.549974  830558 type.go:168] "Request Body" body=""
	I1210 06:33:35.550049  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:35.550379  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:36.050068  830558 type.go:168] "Request Body" body=""
	I1210 06:33:36.050156  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:36.050551  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:36.550267  830558 type.go:168] "Request Body" body=""
	I1210 06:33:36.550341  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:36.550704  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:37.050415  830558 type.go:168] "Request Body" body=""
	I1210 06:33:37.050506  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:37.050765  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:37.550089  830558 type.go:168] "Request Body" body=""
	I1210 06:33:37.550162  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:37.550494  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:37.550551  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:38.050049  830558 type.go:168] "Request Body" body=""
	I1210 06:33:38.050196  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:38.050593  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:38.550283  830558 type.go:168] "Request Body" body=""
	I1210 06:33:38.550352  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:38.550637  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:39.049986  830558 type.go:168] "Request Body" body=""
	I1210 06:33:39.050067  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:39.050379  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:39.550093  830558 type.go:168] "Request Body" body=""
	I1210 06:33:39.550174  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:39.550524  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:39.550606  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:40.049973  830558 type.go:168] "Request Body" body=""
	I1210 06:33:40.050048  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:40.055554  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1210 06:33:40.550566  830558 type.go:168] "Request Body" body=""
	I1210 06:33:40.550648  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:40.551812  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1210 06:33:41.050589  830558 type.go:168] "Request Body" body=""
	I1210 06:33:41.050670  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:41.051002  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:41.550775  830558 type.go:168] "Request Body" body=""
	I1210 06:33:41.550850  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:41.551122  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:41.551174  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:42.050929  830558 type.go:168] "Request Body" body=""
	I1210 06:33:42.051003  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:42.051301  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:42.550943  830558 type.go:168] "Request Body" body=""
	I1210 06:33:42.551032  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:42.551344  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:43.049952  830558 type.go:168] "Request Body" body=""
	I1210 06:33:43.050027  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:43.050287  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:43.550011  830558 type.go:168] "Request Body" body=""
	I1210 06:33:43.550090  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:43.550411  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:44.050208  830558 type.go:168] "Request Body" body=""
	I1210 06:33:44.050291  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:44.050657  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:44.050712  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:44.549928  830558 type.go:168] "Request Body" body=""
	I1210 06:33:44.550010  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:44.550272  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:45.050007  830558 type.go:168] "Request Body" body=""
	I1210 06:33:45.050090  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:45.050538  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:45.550260  830558 type.go:168] "Request Body" body=""
	I1210 06:33:45.550359  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:45.550744  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:46.051019  830558 type.go:168] "Request Body" body=""
	I1210 06:33:46.051104  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:46.051470  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:46.051522  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:46.550019  830558 type.go:168] "Request Body" body=""
	I1210 06:33:46.550105  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:46.550441  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:47.050177  830558 type.go:168] "Request Body" body=""
	I1210 06:33:47.050256  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:47.050580  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:47.550565  830558 type.go:168] "Request Body" body=""
	I1210 06:33:47.550631  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:47.550895  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:48.050718  830558 type.go:168] "Request Body" body=""
	I1210 06:33:48.050799  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:48.051139  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:48.550959  830558 type.go:168] "Request Body" body=""
	I1210 06:33:48.551034  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:48.551396  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:48.551454  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:49.049969  830558 type.go:168] "Request Body" body=""
	I1210 06:33:49.050042  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:49.050364  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:49.550025  830558 type.go:168] "Request Body" body=""
	I1210 06:33:49.550097  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:49.550429  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:50.050016  830558 type.go:168] "Request Body" body=""
	I1210 06:33:50.050099  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:50.050484  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:50.549977  830558 type.go:168] "Request Body" body=""
	I1210 06:33:50.550046  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:50.550304  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:51.049996  830558 type.go:168] "Request Body" body=""
	I1210 06:33:51.050078  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:51.050402  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:51.050452  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:51.550024  830558 type.go:168] "Request Body" body=""
	I1210 06:33:51.550099  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:51.550445  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:52.049971  830558 type.go:168] "Request Body" body=""
	I1210 06:33:52.050042  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:52.050360  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:52.549965  830558 type.go:168] "Request Body" body=""
	I1210 06:33:52.550043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:52.550379  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:53.050013  830558 type.go:168] "Request Body" body=""
	I1210 06:33:53.050108  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:53.050485  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:53.050541  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:53.549985  830558 type.go:168] "Request Body" body=""
	I1210 06:33:53.550061  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:53.550363  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:54.050022  830558 type.go:168] "Request Body" body=""
	I1210 06:33:54.050106  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:54.050509  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:54.550225  830558 type.go:168] "Request Body" body=""
	I1210 06:33:54.550302  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:54.550641  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:55.049973  830558 type.go:168] "Request Body" body=""
	I1210 06:33:55.050050  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:55.050327  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:55.550478  830558 type.go:168] "Request Body" body=""
	I1210 06:33:55.550556  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:55.550933  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:55.550991  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:56.050594  830558 type.go:168] "Request Body" body=""
	I1210 06:33:56.050672  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:56.051056  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:56.550810  830558 type.go:168] "Request Body" body=""
	I1210 06:33:56.550888  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:56.551156  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:57.050906  830558 type.go:168] "Request Body" body=""
	I1210 06:33:57.050979  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:57.051317  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:57.550013  830558 type.go:168] "Request Body" body=""
	I1210 06:33:57.550089  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:57.550388  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:58.049906  830558 type.go:168] "Request Body" body=""
	I1210 06:33:58.049976  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:58.050249  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:58.050294  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:58.549945  830558 type.go:168] "Request Body" body=""
	I1210 06:33:58.550024  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:58.550385  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:59.050095  830558 type.go:168] "Request Body" body=""
	I1210 06:33:59.050176  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:59.050522  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:59.550222  830558 type.go:168] "Request Body" body=""
	I1210 06:33:59.550309  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:59.550612  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:00.050052  830558 type.go:168] "Request Body" body=""
	I1210 06:34:00.050141  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:00.050455  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:00.050684  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:00.549926  830558 type.go:168] "Request Body" body=""
	I1210 06:34:00.550006  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:00.550355  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:01.050662  830558 type.go:168] "Request Body" body=""
	I1210 06:34:01.050737  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:01.051064  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:01.550884  830558 type.go:168] "Request Body" body=""
	I1210 06:34:01.550964  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:01.551306  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:02.050041  830558 type.go:168] "Request Body" body=""
	I1210 06:34:02.050127  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:02.050503  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:02.550195  830558 type.go:168] "Request Body" body=""
	I1210 06:34:02.550268  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:02.550561  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:02.550618  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:03.050297  830558 type.go:168] "Request Body" body=""
	I1210 06:34:03.050373  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:03.050719  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:03.550039  830558 type.go:168] "Request Body" body=""
	I1210 06:34:03.550121  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:03.550444  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:04.049984  830558 type.go:168] "Request Body" body=""
	I1210 06:34:04.050057  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:04.050379  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:04.550075  830558 type.go:168] "Request Body" body=""
	I1210 06:34:04.550154  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:04.550510  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:05.050122  830558 type.go:168] "Request Body" body=""
	I1210 06:34:05.050215  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:05.050591  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:05.050642  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:05.550387  830558 type.go:168] "Request Body" body=""
	I1210 06:34:05.550492  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:05.550754  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:06.050542  830558 type.go:168] "Request Body" body=""
	I1210 06:34:06.050630  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:06.050966  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:06.550619  830558 type.go:168] "Request Body" body=""
	I1210 06:34:06.550697  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:06.551056  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:07.050145  830558 type.go:168] "Request Body" body=""
	I1210 06:34:07.050214  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:07.050555  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:07.550029  830558 type.go:168] "Request Body" body=""
	I1210 06:34:07.550115  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:07.550443  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:07.550518  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:08.050047  830558 type.go:168] "Request Body" body=""
	I1210 06:34:08.050151  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:08.050544  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:08.549967  830558 type.go:168] "Request Body" body=""
	I1210 06:34:08.550038  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:08.550347  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:09.050024  830558 type.go:168] "Request Body" body=""
	I1210 06:34:09.050098  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:09.050423  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:09.550029  830558 type.go:168] "Request Body" body=""
	I1210 06:34:09.550115  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:09.550495  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:09.550556  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:10.050581  830558 type.go:168] "Request Body" body=""
	I1210 06:34:10.050657  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:10.050987  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:10.550878  830558 type.go:168] "Request Body" body=""
	I1210 06:34:10.550954  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:10.551276  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:11.049977  830558 type.go:168] "Request Body" body=""
	I1210 06:34:11.050073  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:11.050436  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:11.550577  830558 type.go:168] "Request Body" body=""
	I1210 06:34:11.550654  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:11.550920  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:11.550968  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:12.050759  830558 type.go:168] "Request Body" body=""
	I1210 06:34:12.050844  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:12.051194  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:12.549950  830558 type.go:168] "Request Body" body=""
	I1210 06:34:12.550032  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:12.550372  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:13.050824  830558 type.go:168] "Request Body" body=""
	I1210 06:34:13.050891  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:13.051155  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:13.550910  830558 type.go:168] "Request Body" body=""
	I1210 06:34:13.550990  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:13.551324  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:13.551384  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:14.049928  830558 type.go:168] "Request Body" body=""
	I1210 06:34:14.050007  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:14.050372  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:14.550061  830558 type.go:168] "Request Body" body=""
	I1210 06:34:14.550132  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:14.550454  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:15.050060  830558 type.go:168] "Request Body" body=""
	I1210 06:34:15.050143  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:15.050587  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:15.550590  830558 type.go:168] "Request Body" body=""
	I1210 06:34:15.550665  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:15.551006  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:16.050139  830558 type.go:168] "Request Body" body=""
	I1210 06:34:16.050219  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:16.050581  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:16.050651  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:16.550343  830558 type.go:168] "Request Body" body=""
	I1210 06:34:16.550420  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:16.550746  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:17.050583  830558 type.go:168] "Request Body" body=""
	I1210 06:34:17.050659  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:17.051004  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:17.550305  830558 type.go:168] "Request Body" body=""
	I1210 06:34:17.550379  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:17.550661  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:18.050028  830558 type.go:168] "Request Body" body=""
	I1210 06:34:18.050117  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:18.050492  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:18.550227  830558 type.go:168] "Request Body" body=""
	I1210 06:34:18.550302  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:18.550654  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:18.550708  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:19.049907  830558 type.go:168] "Request Body" body=""
	I1210 06:34:19.049978  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:19.050300  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:19.549995  830558 type.go:168] "Request Body" body=""
	I1210 06:34:19.550076  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:19.550408  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:20.050129  830558 type.go:168] "Request Body" body=""
	I1210 06:34:20.050269  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:20.050682  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:20.550512  830558 type.go:168] "Request Body" body=""
	I1210 06:34:20.550605  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:20.550929  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:20.550983  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:21.050722  830558 type.go:168] "Request Body" body=""
	I1210 06:34:21.050804  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:21.051141  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:21.550824  830558 type.go:168] "Request Body" body=""
	I1210 06:34:21.550907  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:21.551258  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:22.050508  830558 type.go:168] "Request Body" body=""
	I1210 06:34:22.050581  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:22.050851  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:22.550614  830558 type.go:168] "Request Body" body=""
	I1210 06:34:22.550689  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:22.551037  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:22.551097  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:23.050847  830558 type.go:168] "Request Body" body=""
	I1210 06:34:23.050935  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:23.051235  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:23.549922  830558 type.go:168] "Request Body" body=""
	I1210 06:34:23.550000  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:23.550273  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:24.049978  830558 type.go:168] "Request Body" body=""
	I1210 06:34:24.050066  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:24.050419  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:24.550155  830558 type.go:168] "Request Body" body=""
	I1210 06:34:24.550230  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:24.550613  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:25.050894  830558 type.go:168] "Request Body" body=""
	I1210 06:34:25.050965  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:25.051235  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:25.051280  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:25.550372  830558 type.go:168] "Request Body" body=""
	I1210 06:34:25.550449  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:25.550796  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:26.050683  830558 type.go:168] "Request Body" body=""
	I1210 06:34:26.050763  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:26.051110  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:26.550564  830558 type.go:168] "Request Body" body=""
	I1210 06:34:26.550636  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:26.550899  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:27.050671  830558 type.go:168] "Request Body" body=""
	I1210 06:34:27.050748  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:27.051102  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:27.550781  830558 type.go:168] "Request Body" body=""
	I1210 06:34:27.550860  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:27.551195  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:27.551252  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:28.049904  830558 type.go:168] "Request Body" body=""
	I1210 06:34:28.049986  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:28.050254  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:28.550025  830558 type.go:168] "Request Body" body=""
	I1210 06:34:28.550123  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:28.550518  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:29.050220  830558 type.go:168] "Request Body" body=""
	I1210 06:34:29.050298  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:29.050678  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:29.549921  830558 type.go:168] "Request Body" body=""
	I1210 06:34:29.549996  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:29.550254  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:30.050073  830558 type.go:168] "Request Body" body=""
	I1210 06:34:30.050159  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:30.050509  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:30.050563  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:30.550516  830558 type.go:168] "Request Body" body=""
	I1210 06:34:30.550620  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:30.550952  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:31.050272  830558 type.go:168] "Request Body" body=""
	I1210 06:34:31.050339  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:31.050673  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:31.550029  830558 type.go:168] "Request Body" body=""
	I1210 06:34:31.550101  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:31.550423  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:32.050170  830558 type.go:168] "Request Body" body=""
	I1210 06:34:32.050245  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:32.050587  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:32.050647  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:32.550304  830558 type.go:168] "Request Body" body=""
	I1210 06:34:32.550386  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:32.550677  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:33.049995  830558 type.go:168] "Request Body" body=""
	I1210 06:34:33.050069  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:33.050375  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:33.550029  830558 type.go:168] "Request Body" body=""
	I1210 06:34:33.550108  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:33.550519  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:34.050638  830558 type.go:168] "Request Body" body=""
	I1210 06:34:34.050710  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:34.051024  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:34.051085  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:34.550840  830558 type.go:168] "Request Body" body=""
	I1210 06:34:34.550922  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:34.551274  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:35.050007  830558 type.go:168] "Request Body" body=""
	I1210 06:34:35.050092  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:35.050451  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:35.550503  830558 type.go:168] "Request Body" body=""
	I1210 06:34:35.550574  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:35.550888  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:36.050719  830558 type.go:168] "Request Body" body=""
	I1210 06:34:36.050822  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:36.051263  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:36.051321  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:36.550954  830558 type.go:168] "Request Body" body=""
	I1210 06:34:36.551056  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:36.551466  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:37.050811  830558 type.go:168] "Request Body" body=""
	I1210 06:34:37.050890  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:37.051215  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:37.549947  830558 type.go:168] "Request Body" body=""
	I1210 06:34:37.550028  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:37.550350  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:38.050027  830558 type.go:168] "Request Body" body=""
	I1210 06:34:38.050107  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:38.050495  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:38.550034  830558 type.go:168] "Request Body" body=""
	I1210 06:34:38.550118  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:38.550387  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:38.550431  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:39.050030  830558 type.go:168] "Request Body" body=""
	I1210 06:34:39.050113  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:39.050460  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:39.550021  830558 type.go:168] "Request Body" body=""
	I1210 06:34:39.550094  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:39.550455  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:40.050212  830558 type.go:168] "Request Body" body=""
	I1210 06:34:40.050299  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:40.050616  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:40.550723  830558 type.go:168] "Request Body" body=""
	I1210 06:34:40.550800  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:40.551131  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:40.551184  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:41.050959  830558 type.go:168] "Request Body" body=""
	I1210 06:34:41.051050  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:41.051405  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:41.550069  830558 type.go:168] "Request Body" body=""
	I1210 06:34:41.550140  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:41.550408  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:42.050053  830558 type.go:168] "Request Body" body=""
	I1210 06:34:42.050128  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:42.050423  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:42.549998  830558 type.go:168] "Request Body" body=""
	I1210 06:34:42.550074  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:42.550426  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:43.049964  830558 type.go:168] "Request Body" body=""
	I1210 06:34:43.050040  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:43.050364  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:43.050427  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:43.550060  830558 type.go:168] "Request Body" body=""
	I1210 06:34:43.550138  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:43.550432  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:44.050174  830558 type.go:168] "Request Body" body=""
	I1210 06:34:44.050254  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:44.050577  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:44.550265  830558 type.go:168] "Request Body" body=""
	I1210 06:34:44.550337  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:44.550631  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:45.050106  830558 type.go:168] "Request Body" body=""
	I1210 06:34:45.050225  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:45.051475  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1210 06:34:45.051555  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:45.550586  830558 type.go:168] "Request Body" body=""
	I1210 06:34:45.550670  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:45.551004  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:46.050308  830558 type.go:168] "Request Body" body=""
	I1210 06:34:46.050387  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:46.050713  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:46.550592  830558 type.go:168] "Request Body" body=""
	I1210 06:34:46.550668  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:46.551031  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:47.050814  830558 type.go:168] "Request Body" body=""
	I1210 06:34:47.050890  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:47.051189  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:47.550459  830558 type.go:168] "Request Body" body=""
	I1210 06:34:47.550545  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:47.550844  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:47.550902  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:48.050660  830558 type.go:168] "Request Body" body=""
	I1210 06:34:48.050735  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:48.051052  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:48.550831  830558 type.go:168] "Request Body" body=""
	I1210 06:34:48.550907  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:48.551256  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:49.050342  830558 type.go:168] "Request Body" body=""
	I1210 06:34:49.050418  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:49.050723  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:49.550042  830558 type.go:168] "Request Body" body=""
	I1210 06:34:49.550119  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:49.550450  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:50.050211  830558 type.go:168] "Request Body" body=""
	I1210 06:34:50.050296  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:50.050688  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:50.050747  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:50.550446  830558 type.go:168] "Request Body" body=""
	I1210 06:34:50.550545  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:50.550803  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:51.050575  830558 type.go:168] "Request Body" body=""
	I1210 06:34:51.050658  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:51.050992  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:51.550764  830558 type.go:168] "Request Body" body=""
	I1210 06:34:51.550839  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:51.551183  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:52.050947  830558 type.go:168] "Request Body" body=""
	I1210 06:34:52.051021  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:52.051295  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:52.051339  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:52.550020  830558 type.go:168] "Request Body" body=""
	I1210 06:34:52.550102  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:52.550487  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:53.050213  830558 type.go:168] "Request Body" body=""
	I1210 06:34:53.050304  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:53.050648  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:53.549965  830558 type.go:168] "Request Body" body=""
	I1210 06:34:53.550042  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:53.550369  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:54.050024  830558 type.go:168] "Request Body" body=""
	I1210 06:34:54.050101  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:54.050479  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:54.550177  830558 type.go:168] "Request Body" body=""
	I1210 06:34:54.550254  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:54.550626  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:54.550686  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:55.049960  830558 type.go:168] "Request Body" body=""
	I1210 06:34:55.050038  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:55.050307  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:55.550536  830558 type.go:168] "Request Body" body=""
	I1210 06:34:55.550618  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:55.550953  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:56.050765  830558 type.go:168] "Request Body" body=""
	I1210 06:34:56.050845  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:56.051194  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:56.549892  830558 type.go:168] "Request Body" body=""
	I1210 06:34:56.549977  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:56.550245  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:57.049958  830558 type.go:168] "Request Body" body=""
	I1210 06:34:57.050040  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:57.050378  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:57.050439  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:57.549988  830558 type.go:168] "Request Body" body=""
	I1210 06:34:57.550071  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:57.550412  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:58.049966  830558 type.go:168] "Request Body" body=""
	I1210 06:34:58.050043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:58.050321  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:58.550038  830558 type.go:168] "Request Body" body=""
	I1210 06:34:58.550112  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:58.550398  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:59.050037  830558 type.go:168] "Request Body" body=""
	I1210 06:34:59.050117  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:59.050434  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:59.050536  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:59.550090  830558 type.go:168] "Request Body" body=""
	I1210 06:34:59.550165  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:59.550488  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:00.050082  830558 type.go:168] "Request Body" body=""
	I1210 06:35:00.050172  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:00.050532  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:00.550871  830558 type.go:168] "Request Body" body=""
	I1210 06:35:00.551043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:00.551414  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:01.049974  830558 type.go:168] "Request Body" body=""
	I1210 06:35:01.050056  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:01.050409  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:01.550036  830558 type.go:168] "Request Body" body=""
	I1210 06:35:01.550123  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:01.550506  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:01.550566  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:02.050252  830558 type.go:168] "Request Body" body=""
	I1210 06:35:02.050334  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:02.050718  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:02.549994  830558 type.go:168] "Request Body" body=""
	I1210 06:35:02.550066  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:02.550338  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:03.050044  830558 type.go:168] "Request Body" body=""
	I1210 06:35:03.050121  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:03.050446  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:03.550201  830558 type.go:168] "Request Body" body=""
	I1210 06:35:03.550278  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:03.550618  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:03.550677  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:04.049974  830558 type.go:168] "Request Body" body=""
	I1210 06:35:04.050053  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:04.050326  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:04.550002  830558 type.go:168] "Request Body" body=""
	I1210 06:35:04.550073  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:04.550366  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:05.050030  830558 type.go:168] "Request Body" body=""
	I1210 06:35:05.050111  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:05.050435  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:05.550392  830558 type.go:168] "Request Body" body=""
	I1210 06:35:05.550487  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:05.550754  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:05.550797  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:06.050578  830558 type.go:168] "Request Body" body=""
	I1210 06:35:06.050658  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:06.051028  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:06.550698  830558 type.go:168] "Request Body" body=""
	I1210 06:35:06.550789  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:06.551170  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:07.050527  830558 type.go:168] "Request Body" body=""
	I1210 06:35:07.050605  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:07.050889  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:07.550670  830558 type.go:168] "Request Body" body=""
	I1210 06:35:07.550754  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:07.551130  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:07.551186  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:08.049928  830558 type.go:168] "Request Body" body=""
	I1210 06:35:08.050023  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:08.050388  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:08.550709  830558 type.go:168] "Request Body" body=""
	I1210 06:35:08.550783  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:08.551109  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:09.050933  830558 type.go:168] "Request Body" body=""
	I1210 06:35:09.051017  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:09.051361  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:09.550061  830558 type.go:168] "Request Body" body=""
	I1210 06:35:09.550147  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:09.550539  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:10.049990  830558 type.go:168] "Request Body" body=""
	I1210 06:35:10.050069  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:10.050353  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:10.050409  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:10.550333  830558 type.go:168] "Request Body" body=""
	I1210 06:35:10.550412  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:10.550769  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:11.050573  830558 type.go:168] "Request Body" body=""
	I1210 06:35:11.050649  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:11.050998  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:11.550270  830558 type.go:168] "Request Body" body=""
	I1210 06:35:11.550348  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:11.550636  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:12.050031  830558 type.go:168] "Request Body" body=""
	I1210 06:35:12.050112  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:12.050460  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:12.050544  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:12.550016  830558 type.go:168] "Request Body" body=""
	I1210 06:35:12.550093  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:12.550407  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:13.049930  830558 type.go:168] "Request Body" body=""
	I1210 06:35:13.050003  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:13.050262  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:13.549947  830558 type.go:168] "Request Body" body=""
	I1210 06:35:13.550020  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:13.550364  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:14.049948  830558 type.go:168] "Request Body" body=""
	I1210 06:35:14.050033  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:14.050379  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:14.549995  830558 type.go:168] "Request Body" body=""
	I1210 06:35:14.550069  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:14.550374  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:14.550430  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:15.049977  830558 type.go:168] "Request Body" body=""
	I1210 06:35:15.050080  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:15.050511  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:15.550549  830558 type.go:168] "Request Body" body=""
	I1210 06:35:15.550643  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:15.550979  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:16.050252  830558 type.go:168] "Request Body" body=""
	I1210 06:35:16.050330  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:16.050628  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:16.550008  830558 type.go:168] "Request Body" body=""
	I1210 06:35:16.550088  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:16.550421  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:16.550501  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:17.050213  830558 type.go:168] "Request Body" body=""
	I1210 06:35:17.050312  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:17.050693  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:17.549908  830558 type.go:168] "Request Body" body=""
	I1210 06:35:17.549986  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:17.550246  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:18.049930  830558 type.go:168] "Request Body" body=""
	I1210 06:35:18.050001  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:18.050297  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:18.549986  830558 type.go:168] "Request Body" body=""
	I1210 06:35:18.550063  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:18.550458  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:18.550526  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:19.050194  830558 type.go:168] "Request Body" body=""
	I1210 06:35:19.050272  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:19.050560  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:19.550268  830558 type.go:168] "Request Body" body=""
	I1210 06:35:19.550350  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:19.550659  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:20.050392  830558 type.go:168] "Request Body" body=""
	I1210 06:35:20.050488  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:20.050847  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:20.550656  830558 type.go:168] "Request Body" body=""
	I1210 06:35:20.550726  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:20.551004  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:20.551047  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:21.050812  830558 type.go:168] "Request Body" body=""
	I1210 06:35:21.050894  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:21.051248  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:21.550928  830558 type.go:168] "Request Body" body=""
	I1210 06:35:21.551007  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:21.551349  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:22.049936  830558 type.go:168] "Request Body" body=""
	I1210 06:35:22.050007  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:22.050275  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:22.549967  830558 type.go:168] "Request Body" body=""
	I1210 06:35:22.550042  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:22.550409  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:23.050135  830558 type.go:168] "Request Body" body=""
	I1210 06:35:23.050215  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:23.050584  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:23.050648  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:23.550304  830558 type.go:168] "Request Body" body=""
	I1210 06:35:23.550376  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:23.550716  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:24.050410  830558 type.go:168] "Request Body" body=""
	I1210 06:35:24.050504  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:24.050842  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:24.550612  830558 type.go:168] "Request Body" body=""
	I1210 06:35:24.550694  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:24.550967  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:25.050637  830558 type.go:168] "Request Body" body=""
	I1210 06:35:25.050728  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:25.051015  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:25.051074  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:25.550298  830558 type.go:168] "Request Body" body=""
	I1210 06:35:25.550378  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:25.550744  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:26.050574  830558 type.go:168] "Request Body" body=""
	I1210 06:35:26.050656  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:26.051021  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:26.550326  830558 type.go:168] "Request Body" body=""
	I1210 06:35:26.550392  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:26.550669  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:27.050024  830558 type.go:168] "Request Body" body=""
	I1210 06:35:27.050102  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:27.050427  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:27.550033  830558 type.go:168] "Request Body" body=""
	I1210 06:35:27.550110  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:27.550485  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:27.550550  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:28.050833  830558 type.go:168] "Request Body" body=""
	I1210 06:35:28.050903  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:28.051180  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:28.550989  830558 type.go:168] "Request Body" body=""
	I1210 06:35:28.551079  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:28.551403  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:29.050086  830558 type.go:168] "Request Body" body=""
	I1210 06:35:29.050165  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:29.050503  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:29.550827  830558 type.go:168] "Request Body" body=""
	I1210 06:35:29.550916  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:29.551182  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:29.551227  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:30.049951  830558 type.go:168] "Request Body" body=""
	I1210 06:35:30.050049  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:30.050563  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-534748 polls elided: the request/response pair above repeats every ~500 ms from 06:35:30.5 through 06:36:32.0 with the same Accept and User-Agent headers; every response is empty (status="" headers="" milliseconds=0, except one 4 ms attempt at 06:35:46) because the connection is refused, and node_ready.go:55 logs the same `error getting node "functional-534748" condition "Ready" status (will retry) ... connect: connection refused` warning roughly every 2 s from 06:35:32 through 06:36:31 ...]
	I1210 06:36:32.550834  830558 type.go:168] "Request Body" body=""
	I1210 06:36:32.550909  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:32.551181  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:33.049926  830558 type.go:168] "Request Body" body=""
	I1210 06:36:33.050020  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:33.050320  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:33.550027  830558 type.go:168] "Request Body" body=""
	I1210 06:36:33.550101  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:33.550421  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:33.550496  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:34.050148  830558 type.go:168] "Request Body" body=""
	I1210 06:36:34.050221  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:34.050554  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:34.550035  830558 type.go:168] "Request Body" body=""
	I1210 06:36:34.550113  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:34.550403  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:35.050050  830558 type.go:168] "Request Body" body=""
	I1210 06:36:35.050133  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:35.050454  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:35.550293  830558 type.go:168] "Request Body" body=""
	I1210 06:36:35.550366  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:35.550646  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:35.550688  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:36.050032  830558 type.go:168] "Request Body" body=""
	I1210 06:36:36.050119  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:36.050506  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:36.550078  830558 type.go:168] "Request Body" body=""
	I1210 06:36:36.550152  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:36.550514  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:37.050074  830558 type.go:168] "Request Body" body=""
	I1210 06:36:37.050153  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:37.050427  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:37.550003  830558 type.go:168] "Request Body" body=""
	I1210 06:36:37.550086  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:37.550452  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:38.050242  830558 type.go:168] "Request Body" body=""
	I1210 06:36:38.050345  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:38.050820  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:38.050886  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:38.550627  830558 type.go:168] "Request Body" body=""
	I1210 06:36:38.550702  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:38.550965  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:39.050786  830558 type.go:168] "Request Body" body=""
	I1210 06:36:39.050858  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:39.051199  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:39.550826  830558 type.go:168] "Request Body" body=""
	I1210 06:36:39.550908  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:39.551239  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:40.049947  830558 type.go:168] "Request Body" body=""
	I1210 06:36:40.050037  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:40.050342  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:40.550382  830558 type.go:168] "Request Body" body=""
	I1210 06:36:40.550458  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:40.550826  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:40.550883  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:41.050667  830558 type.go:168] "Request Body" body=""
	I1210 06:36:41.050745  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:41.051117  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:41.550878  830558 type.go:168] "Request Body" body=""
	I1210 06:36:41.550958  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:41.551274  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:42.050917  830558 type.go:168] "Request Body" body=""
	I1210 06:36:42.050997  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:42.051354  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:42.550036  830558 type.go:168] "Request Body" body=""
	I1210 06:36:42.550117  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:42.550436  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:43.049951  830558 type.go:168] "Request Body" body=""
	I1210 06:36:43.050067  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:43.050321  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:43.050369  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:43.549987  830558 type.go:168] "Request Body" body=""
	I1210 06:36:43.550066  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:43.550404  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:44.050824  830558 type.go:168] "Request Body" body=""
	I1210 06:36:44.050905  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:44.051231  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:44.550482  830558 type.go:168] "Request Body" body=""
	I1210 06:36:44.550555  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:44.550855  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:45.050825  830558 type.go:168] "Request Body" body=""
	I1210 06:36:45.050916  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:45.051222  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:45.051274  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:45.550929  830558 type.go:168] "Request Body" body=""
	I1210 06:36:45.551008  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:45.551345  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:46.049915  830558 type.go:168] "Request Body" body=""
	I1210 06:36:46.050010  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:46.050329  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:46.549983  830558 type.go:168] "Request Body" body=""
	I1210 06:36:46.550060  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:46.550400  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:47.050060  830558 type.go:168] "Request Body" body=""
	I1210 06:36:47.050141  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:47.050495  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:47.549925  830558 type.go:168] "Request Body" body=""
	I1210 06:36:47.549995  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:47.550273  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:47.550317  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:48.049994  830558 type.go:168] "Request Body" body=""
	I1210 06:36:48.050095  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:48.050460  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:48.550037  830558 type.go:168] "Request Body" body=""
	I1210 06:36:48.550112  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:48.550446  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:49.050030  830558 type.go:168] "Request Body" body=""
	I1210 06:36:49.050116  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:49.050497  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:49.550029  830558 type.go:168] "Request Body" body=""
	I1210 06:36:49.550104  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:49.550496  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:49.550554  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:50.050050  830558 type.go:168] "Request Body" body=""
	I1210 06:36:50.050125  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:50.050500  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:50.550519  830558 type.go:168] "Request Body" body=""
	I1210 06:36:50.550589  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:50.550850  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:51.050731  830558 type.go:168] "Request Body" body=""
	I1210 06:36:51.050803  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:51.051136  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:51.550907  830558 type.go:168] "Request Body" body=""
	I1210 06:36:51.550985  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:51.551292  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:51.551347  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:52.049966  830558 type.go:168] "Request Body" body=""
	I1210 06:36:52.050040  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:52.050305  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:52.549993  830558 type.go:168] "Request Body" body=""
	I1210 06:36:52.550070  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:52.550404  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:53.050044  830558 type.go:168] "Request Body" body=""
	I1210 06:36:53.050127  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:53.050427  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:53.550649  830558 type.go:168] "Request Body" body=""
	I1210 06:36:53.550726  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:53.551013  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:54.050845  830558 type.go:168] "Request Body" body=""
	I1210 06:36:54.050929  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:54.051278  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:54.051340  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:54.549988  830558 type.go:168] "Request Body" body=""
	I1210 06:36:54.550067  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:54.550384  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:55.049973  830558 type.go:168] "Request Body" body=""
	I1210 06:36:55.050057  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:55.050384  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:55.550589  830558 type.go:168] "Request Body" body=""
	I1210 06:36:55.550672  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:55.550984  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:56.050875  830558 type.go:168] "Request Body" body=""
	I1210 06:36:56.050955  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:56.051282  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:56.549975  830558 type.go:168] "Request Body" body=""
	I1210 06:36:56.550042  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:56.550349  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:56.550406  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:57.050072  830558 type.go:168] "Request Body" body=""
	I1210 06:36:57.050159  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:57.050499  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:57.549979  830558 type.go:168] "Request Body" body=""
	I1210 06:36:57.550054  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:57.550368  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:58.049963  830558 type.go:168] "Request Body" body=""
	I1210 06:36:58.050043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:58.050303  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:58.549986  830558 type.go:168] "Request Body" body=""
	I1210 06:36:58.550064  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:58.550409  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:58.550486  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:59.050159  830558 type.go:168] "Request Body" body=""
	I1210 06:36:59.050244  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:59.050617  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:59.550004  830558 type.go:168] "Request Body" body=""
	I1210 06:36:59.550080  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:59.550332  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:00.050088  830558 type.go:168] "Request Body" body=""
	I1210 06:37:00.050180  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:00.050543  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:00.550848  830558 type.go:168] "Request Body" body=""
	I1210 06:37:00.550935  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:00.551280  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:00.551339  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:01.050564  830558 type.go:168] "Request Body" body=""
	I1210 06:37:01.050644  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:01.050904  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:01.550685  830558 type.go:168] "Request Body" body=""
	I1210 06:37:01.550761  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:01.551120  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:02.050955  830558 type.go:168] "Request Body" body=""
	I1210 06:37:02.051039  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:02.051359  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:02.550089  830558 type.go:168] "Request Body" body=""
	I1210 06:37:02.550158  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:02.550512  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:03.049995  830558 type.go:168] "Request Body" body=""
	I1210 06:37:03.050079  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:03.050427  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:03.050509  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:03.549974  830558 type.go:168] "Request Body" body=""
	I1210 06:37:03.550095  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:03.550438  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:04.050664  830558 type.go:168] "Request Body" body=""
	I1210 06:37:04.050742  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:04.051055  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:04.550863  830558 type.go:168] "Request Body" body=""
	I1210 06:37:04.550938  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:04.551272  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:05.049983  830558 type.go:168] "Request Body" body=""
	I1210 06:37:05.050061  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:05.050389  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:05.550411  830558 type.go:168] "Request Body" body=""
	I1210 06:37:05.550500  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:05.550764  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:05.550808  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:06.050441  830558 type.go:168] "Request Body" body=""
	I1210 06:37:06.050533  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:06.050866  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:06.550678  830558 type.go:168] "Request Body" body=""
	I1210 06:37:06.550761  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:06.551104  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:07.050870  830558 type.go:168] "Request Body" body=""
	I1210 06:37:07.050944  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:07.051251  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:07.549972  830558 type.go:168] "Request Body" body=""
	I1210 06:37:07.550053  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:07.550410  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:08.050155  830558 type.go:168] "Request Body" body=""
	I1210 06:37:08.050239  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:08.050601  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:08.050664  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:08.549949  830558 type.go:168] "Request Body" body=""
	I1210 06:37:08.550023  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:08.550357  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:09.050028  830558 type.go:168] "Request Body" body=""
	I1210 06:37:09.050105  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:09.050443  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:09.550204  830558 type.go:168] "Request Body" body=""
	I1210 06:37:09.550291  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:09.550711  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:10.050422  830558 type.go:168] "Request Body" body=""
	I1210 06:37:10.050521  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:10.050851  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:10.050899  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:10.550710  830558 type.go:168] "Request Body" body=""
	I1210 06:37:10.550785  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:10.551141  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:11.050942  830558 type.go:168] "Request Body" body=""
	I1210 06:37:11.051021  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:11.051363  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:11.549975  830558 type.go:168] "Request Body" body=""
	I1210 06:37:11.550047  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:11.550368  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:12.050025  830558 type.go:168] "Request Body" body=""
	I1210 06:37:12.050103  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:12.050446  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:12.550179  830558 type.go:168] "Request Body" body=""
	I1210 06:37:12.550253  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:12.550680  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:12.550735  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:13.049956  830558 type.go:168] "Request Body" body=""
	I1210 06:37:13.050028  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:13.050303  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:13.550005  830558 type.go:168] "Request Body" body=""
	I1210 06:37:13.550080  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:13.550413  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:14.050155  830558 type.go:168] "Request Body" body=""
	I1210 06:37:14.050237  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:14.050614  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:14.550902  830558 type.go:168] "Request Body" body=""
	I1210 06:37:14.550998  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:14.551307  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:14.551376  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:15.050054  830558 type.go:168] "Request Body" body=""
	I1210 06:37:15.050140  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:15.050549  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:15.550678  830558 type.go:168] "Request Body" body=""
	I1210 06:37:15.550756  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:15.551093  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:16.050865  830558 type.go:168] "Request Body" body=""
	I1210 06:37:16.050946  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:16.051228  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:16.549930  830558 type.go:168] "Request Body" body=""
	I1210 06:37:16.550004  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:16.550336  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:17.049921  830558 type.go:168] "Request Body" body=""
	I1210 06:37:17.049995  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:17.050336  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:17.050393  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:17.550046  830558 type.go:168] "Request Body" body=""
	I1210 06:37:17.550123  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:17.550394  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:18.049994  830558 type.go:168] "Request Body" body=""
	I1210 06:37:18.050068  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:18.050366  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:18.550054  830558 type.go:168] "Request Body" body=""
	I1210 06:37:18.550130  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:18.550489  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:19.050104  830558 type.go:168] "Request Body" body=""
	I1210 06:37:19.050185  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:19.050515  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:19.050566  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:19.550225  830558 type.go:168] "Request Body" body=""
	I1210 06:37:19.550302  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:19.550665  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:20.050424  830558 type.go:168] "Request Body" body=""
	I1210 06:37:20.050518  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:20.050884  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:20.550762  830558 type.go:168] "Request Body" body=""
	I1210 06:37:20.550835  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:20.551162  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:21.050936  830558 type.go:168] "Request Body" body=""
	I1210 06:37:21.051012  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:21.051344  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:21.051398  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:21.550076  830558 type.go:168] "Request Body" body=""
	I1210 06:37:21.550149  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:21.550491  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:22.050770  830558 type.go:168] "Request Body" body=""
	I1210 06:37:22.050844  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:22.051151  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:22.550952  830558 type.go:168] "Request Body" body=""
	I1210 06:37:22.551036  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:22.551372  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:23.050031  830558 type.go:168] "Request Body" body=""
	I1210 06:37:23.050110  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:23.050450  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:23.550623  830558 type.go:168] "Request Body" body=""
	I1210 06:37:23.550694  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:23.551091  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:23.551140  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:24.050873  830558 type.go:168] "Request Body" body=""
	I1210 06:37:24.050957  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:24.051303  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:24.550019  830558 type.go:168] "Request Body" body=""
	I1210 06:37:24.550100  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:24.550430  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:25.050726  830558 type.go:168] "Request Body" body=""
	I1210 06:37:25.050795  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:25.051103  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:25.550027  830558 type.go:168] "Request Body" body=""
	I1210 06:37:25.550105  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:25.550431  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:26.050042  830558 type.go:168] "Request Body" body=""
	I1210 06:37:26.050122  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:26.050517  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:26.050574  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:26.550022  830558 type.go:168] "Request Body" body=""
	I1210 06:37:26.550096  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:26.550377  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:27.050012  830558 type.go:168] "Request Body" body=""
	I1210 06:37:27.050089  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:27.050437  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:27.550158  830558 type.go:168] "Request Body" body=""
	I1210 06:37:27.550236  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:27.550601  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:28.049974  830558 type.go:168] "Request Body" body=""
	I1210 06:37:28.050054  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:28.050437  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:28.550041  830558 type.go:168] "Request Body" body=""
	I1210 06:37:28.550120  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:28.550456  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:28.550530  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:29.050006  830558 type.go:168] "Request Body" body=""
	I1210 06:37:29.050087  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:29.050404  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:29.550133  830558 type.go:168] "Request Body" body=""
	I1210 06:37:29.550213  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:29.550518  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:30.050099  830558 type.go:168] "Request Body" body=""
	I1210 06:37:30.050189  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:30.050525  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:30.550673  830558 type.go:168] "Request Body" body=""
	I1210 06:37:30.550754  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:30.551134  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:30.551190  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:31.050885  830558 type.go:168] "Request Body" body=""
	I1210 06:37:31.050960  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:31.051274  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:31.550000  830558 type.go:168] "Request Body" body=""
	I1210 06:37:31.550131  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:31.550446  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:32.050028  830558 type.go:168] "Request Body" body=""
	I1210 06:37:32.050112  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:32.050408  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:32.550681  830558 type.go:168] "Request Body" body=""
	I1210 06:37:32.550771  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:32.551081  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:33.050860  830558 type.go:168] "Request Body" body=""
	I1210 06:37:33.050934  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:33.051248  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:33.051305  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:33.550026  830558 type.go:168] "Request Body" body=""
	I1210 06:37:33.550106  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:33.550448  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:34.049973  830558 type.go:168] "Request Body" body=""
	I1210 06:37:34.050046  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:34.050378  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:34.549977  830558 type.go:168] "Request Body" body=""
	I1210 06:37:34.550051  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:34.550376  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:35.050023  830558 type.go:168] "Request Body" body=""
	I1210 06:37:35.050098  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:35.050432  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:35.550550  830558 type.go:168] "Request Body" body=""
	I1210 06:37:35.550628  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:35.550892  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:35.550953  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:36.050690  830558 type.go:168] "Request Body" body=""
	I1210 06:37:36.050767  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:36.051081  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:36.550920  830558 type.go:168] "Request Body" body=""
	I1210 06:37:36.551001  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:36.551377  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:37.050702  830558 type.go:168] "Request Body" body=""
	I1210 06:37:37.050783  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:37.051058  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:37.550812  830558 type.go:168] "Request Body" body=""
	I1210 06:37:37.550889  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:37.551223  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:37.551281  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:38.049987  830558 type.go:168] "Request Body" body=""
	I1210 06:37:38.050064  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:38.050409  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:38.550706  830558 type.go:168] "Request Body" body=""
	I1210 06:37:38.550780  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:38.551043  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:39.050813  830558 type.go:168] "Request Body" body=""
	I1210 06:37:39.050899  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:39.051232  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:39.549927  830558 type.go:168] "Request Body" body=""
	I1210 06:37:39.550005  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:39.550337  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:40.050658  830558 type.go:168] "Request Body" body=""
	I1210 06:37:40.050741  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:40.051035  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:40.051084  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:40.549980  830558 type.go:168] "Request Body" body=""
	I1210 06:37:40.550072  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:40.550505  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:41.050026  830558 type.go:168] "Request Body" body=""
	I1210 06:37:41.050097  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:41.050387  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:41.550563  830558 type.go:168] "Request Body" body=""
	I1210 06:37:41.550631  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:41.550897  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:42.050745  830558 type.go:168] "Request Body" body=""
	I1210 06:37:42.050826  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:42.051169  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:42.051228  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:42.550950  830558 type.go:168] "Request Body" body=""
	I1210 06:37:42.551028  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:42.551348  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:43.050570  830558 type.go:168] "Request Body" body=""
	I1210 06:37:43.050646  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:43.050920  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:43.550724  830558 type.go:168] "Request Body" body=""
	I1210 06:37:43.550804  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:43.551126  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:44.050930  830558 type.go:168] "Request Body" body=""
	I1210 06:37:44.051007  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:44.051348  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:44.051402  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:44.550445  830558 type.go:168] "Request Body" body=""
	I1210 06:37:44.550537  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:44.550795  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:45.050638  830558 type.go:168] "Request Body" body=""
	I1210 06:37:45.050730  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:45.051044  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:45.550527  830558 type.go:168] "Request Body" body=""
	I1210 06:37:45.550601  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:45.550931  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:46.050582  830558 type.go:168] "Request Body" body=""
	I1210 06:37:46.050725  830558 node_ready.go:38] duration metric: took 6m0.000935284s for node "functional-534748" to be "Ready" ...
	I1210 06:37:46.053848  830558 out.go:203] 
	W1210 06:37:46.056787  830558 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 06:37:46.056817  830558 out.go:285] * 
	W1210 06:37:46.059108  830558 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:37:46.062914  830558 out.go:203] 

** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-534748 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.448363074s for "functional-534748" cluster.
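The six-minute stretch of log above is minikube's node-readiness poll: a GET to /api/v1/nodes/functional-534748 roughly every 500ms, each attempt refused because nothing was listening on 192.168.49.2:8441, until the 6m0s StartHostTimeout from the cluster config lapsed. As a rough illustration of that pattern (a minimal client-go sketch, not minikube's actual node_ready.go, and assuming a *kubernetes.Clientset constructed elsewhere):

	// readiness.go: sketch of a node-Ready poll with the cadence seen above.
	package readiness

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// WaitNodeReady polls every 500ms for up to 6 minutes, mirroring the
	// interval and StartHostTimeout visible in the log.
	func WaitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					// A refused connection is retryable, matching the
					// "(will retry)" warnings in the log above.
					fmt.Printf("error getting node %q (will retry): %v\n", name, err)
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

Since the apiserver never came up, every iteration of such a poll keeps returning false until the context deadline, which is exactly the "WaitNodeCondition: context deadline exceeded" failure reported above.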
I1210 06:37:46.678841  786751 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-534748
helpers_test.go:244: (dbg) docker inspect functional-534748:

-- stdout --
	[
	    {
	        "Id": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	        "Created": "2025-12-10T06:23:23.608302198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 825111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:23:23.673039154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hostname",
	        "HostsPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hosts",
	        "LogPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db-json.log",
	        "Name": "/functional-534748",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-534748:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-534748",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	                "LowerDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-534748",
	                "Source": "/var/lib/docker/volumes/functional-534748/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-534748",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-534748",
	                "name.minikube.sigs.k8s.io": "functional-534748",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3110adc4cbcb3c834e173481804e433b3e0d0c32a77d5d828fb821433a717f76",
	            "SandboxKey": "/var/run/docker/netns/3110adc4cbcb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-534748": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:67:c6:ed:32:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5c6adbf364f07065a2170dca5c03ba4b1a3df833c656e117d5727d09797ca30e",
	                    "EndpointID": "ba255b7075cbb83aa425533c0034d7778d609f47fa6442925eb6c8393edb0fa6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-534748",
	                        "afb46bc1850e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
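The full inspect dump is what the post-mortem archives, but single fields can be pulled with docker's Go-template formatting, the same pattern the minikube log further down uses for the 22/tcp port. For example (values taken from the dump above):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-534748   # 33533, host side of the apiserver port
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' functional-534748           # 192.168.49.2

Note that the dump shows the container itself healthy: State.Status is "running", OOMKilled is false, and 8441/tcp is published to 127.0.0.1:33533, so the connection-refused errors originate inside the guest rather than at the Docker layer.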
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748: exit status 2 (360.837369ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
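Taken together, the two signals are consistent rather than contradictory: docker reports the functional-534748 container Running, while the non-zero status from minikube reflects the Kubernetes components inside it never becoming reachable on port 8441.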
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-634209 /tmp/TestFunctionalparallelMountCmdspecific-port1205426619/001:/mount-9p --alsologtostderr -v=1 --port 46464                       │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh            │ functional-634209 ssh findmnt -T /mount-9p | grep 9p                                                                                                    │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh            │ functional-634209 ssh findmnt -T /mount-9p | grep 9p                                                                                                    │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh            │ functional-634209 ssh -- ls -la /mount-9p                                                                                                               │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh            │ functional-634209 ssh sudo umount -f /mount-9p                                                                                                          │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ mount          │ -p functional-634209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2460956714/001:/mount1 --alsologtostderr -v=1                                      │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ mount          │ -p functional-634209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2460956714/001:/mount2 --alsologtostderr -v=1                                      │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ mount          │ -p functional-634209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2460956714/001:/mount3 --alsologtostderr -v=1                                      │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh            │ functional-634209 ssh findmnt -T /mount1                                                                                                                │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh            │ functional-634209 ssh findmnt -T /mount2                                                                                                                │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh            │ functional-634209 ssh findmnt -T /mount3                                                                                                                │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ mount          │ -p functional-634209 --kill=true                                                                                                                        │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ update-context │ functional-634209 update-context --alsologtostderr -v=2                                                                                                 │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ update-context │ functional-634209 update-context --alsologtostderr -v=2                                                                                                 │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ update-context │ functional-634209 update-context --alsologtostderr -v=2                                                                                                 │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image          │ functional-634209 image ls --format short --alsologtostderr                                                                                             │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image          │ functional-634209 image ls --format yaml --alsologtostderr                                                                                              │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh            │ functional-634209 ssh pgrep buildkitd                                                                                                                   │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ image          │ functional-634209 image ls --format json --alsologtostderr                                                                                              │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image          │ functional-634209 image build -t localhost/my-image:functional-634209 testdata/build --alsologtostderr                                                  │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image          │ functional-634209 image ls --format table --alsologtostderr                                                                                             │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image          │ functional-634209 image ls                                                                                                                              │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ delete         │ -p functional-634209                                                                                                                                    │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start          │ -p functional-534748 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ start          │ -p functional-534748 --alsologtostderr -v=8                                                                                                             │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:31 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:31:40
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:31:40.279311  830558 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:31:40.279505  830558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:31:40.279534  830558 out.go:374] Setting ErrFile to fd 2...
	I1210 06:31:40.279556  830558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:31:40.279849  830558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 06:31:40.280242  830558 out.go:368] Setting JSON to false
	I1210 06:31:40.281164  830558 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18825,"bootTime":1765329476,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 06:31:40.281259  830558 start.go:143] virtualization:  
	I1210 06:31:40.284710  830558 out.go:179] * [functional-534748] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:31:40.288411  830558 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:31:40.288473  830558 notify.go:221] Checking for updates...
	I1210 06:31:40.295121  830558 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:31:40.302607  830558 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:40.305522  830558 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 06:31:40.308355  830558 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:31:40.311698  830558 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:31:40.315095  830558 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:31:40.315199  830558 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:31:40.353797  830558 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:31:40.353929  830558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:31:40.415859  830558 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:31:40.405265704 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:31:40.415979  830558 docker.go:319] overlay module found
	I1210 06:31:40.419085  830558 out.go:179] * Using the docker driver based on existing profile
	I1210 06:31:40.421970  830558 start.go:309] selected driver: docker
	I1210 06:31:40.421991  830558 start.go:927] validating driver "docker" against &{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:31:40.422101  830558 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:31:40.422196  830558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:31:40.479216  830558 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:31:40.46865578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:31:40.479663  830558 cni.go:84] Creating CNI manager for ""
	I1210 06:31:40.479723  830558 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:31:40.479768  830558 start.go:353] cluster config:
	{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:31:40.482983  830558 out.go:179] * Starting "functional-534748" primary control-plane node in "functional-534748" cluster
	I1210 06:31:40.485814  830558 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 06:31:40.488782  830558 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:31:40.491625  830558 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:31:40.491676  830558 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 06:31:40.491687  830558 cache.go:65] Caching tarball of preloaded images
	I1210 06:31:40.491736  830558 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:31:40.491792  830558 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 06:31:40.491804  830558 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1210 06:31:40.491917  830558 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/config.json ...
	I1210 06:31:40.511808  830558 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:31:40.511830  830558 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:31:40.511847  830558 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:31:40.511881  830558 start.go:360] acquireMachinesLock for functional-534748: {Name:mkd9a3d78ae3a00b69c5be0f7badb099aea924eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:31:40.511943  830558 start.go:364] duration metric: took 39.41µs to acquireMachinesLock for "functional-534748"
	I1210 06:31:40.511975  830558 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:31:40.511985  830558 fix.go:54] fixHost starting: 
	I1210 06:31:40.512241  830558 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:31:40.529256  830558 fix.go:112] recreateIfNeeded on functional-534748: state=Running err=<nil>
	W1210 06:31:40.529298  830558 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:31:40.532448  830558 out.go:252] * Updating the running docker "functional-534748" container ...
	I1210 06:31:40.532488  830558 machine.go:94] provisionDockerMachine start ...
	I1210 06:31:40.532584  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:40.550188  830558 main.go:143] libmachine: Using SSH client type: native
	I1210 06:31:40.550543  830558 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:31:40.550560  830558 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:31:40.681995  830558 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-534748
	
	I1210 06:31:40.682020  830558 ubuntu.go:182] provisioning hostname "functional-534748"
	I1210 06:31:40.682096  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:40.699737  830558 main.go:143] libmachine: Using SSH client type: native
	I1210 06:31:40.700054  830558 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:31:40.700072  830558 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-534748 && echo "functional-534748" | sudo tee /etc/hostname
	I1210 06:31:40.843977  830558 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-534748
	
	I1210 06:31:40.844083  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:40.862627  830558 main.go:143] libmachine: Using SSH client type: native
	I1210 06:31:40.862951  830558 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:31:40.862975  830558 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-534748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-534748/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-534748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:31:40.999052  830558 main.go:143] libmachine: SSH cmd err, output: <nil>: 
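Each provisioning step above follows the same pattern: resolve the host port Docker mapped to the container's 22/tcp with `docker container inspect -f`, then run a shell snippet over SSH against 127.0.0.1 on that port. A minimal Go sketch of the port lookup, reusing the inspect template from the log (the container name and the printed ssh hint are illustrative, not minikube source):

    // A minimal sketch (not minikube source): resolve the host port that
    // Docker mapped to the container's 22/tcp, mirroring the repeated
    // `docker container inspect -f` calls in the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func sshPort(container string) (string, error) {
    	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
    	if err != nil {
    		return "", fmt.Errorf("inspect %s: %w", container, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshPort("functional-534748")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("ssh -p %s docker@127.0.0.1\n", port) // 33530 in this run
    }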
	I1210 06:31:40.999087  830558 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 06:31:40.999116  830558 ubuntu.go:190] setting up certificates
	I1210 06:31:40.999127  830558 provision.go:84] configureAuth start
	I1210 06:31:40.999208  830558 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
	I1210 06:31:41.018099  830558 provision.go:143] copyHostCerts
	I1210 06:31:41.018148  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 06:31:41.018188  830558 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 06:31:41.018200  830558 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 06:31:41.018276  830558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 06:31:41.018376  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 06:31:41.018397  830558 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 06:31:41.018412  830558 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 06:31:41.018442  830558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 06:31:41.018539  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 06:31:41.018565  830558 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 06:31:41.018570  830558 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 06:31:41.018598  830558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 06:31:41.018664  830558 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.functional-534748 san=[127.0.0.1 192.168.49.2 functional-534748 localhost minikube]
	I1210 06:31:41.416959  830558 provision.go:177] copyRemoteCerts
	I1210 06:31:41.417039  830558 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:31:41.417085  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.434643  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:41.530263  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 06:31:41.530324  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:31:41.547539  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 06:31:41.547601  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:31:41.565054  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 06:31:41.565115  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 06:31:41.582586  830558 provision.go:87] duration metric: took 583.43959ms to configureAuth
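The configureAuth step that just completed regenerates the machine's Docker TLS material: host-side CA, cert, and key PEMs are refreshed in the machine store, a server certificate is minted with the SANs shown in the log, and the results are scp'd into /etc/docker. A compilable sketch of the certificate template under those logged parameters — the helper name, package, and RSA key size are assumptions, not minikube's actual provision code:

    // A sketch (assumed, not minikube source): mint a server certificate
    // signed by the minikube CA with the SANs logged above. The caller
    // supplies the parsed CA certificate and key.
    package certs

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    func ServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) (certDER []byte, key *rsa.PrivateKey, err error) {
    	key, err = rsa.GenerateKey(rand.Reader, 2048) // key size is an assumption
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.functional-534748"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
    		// san=[127.0.0.1 192.168.49.2 functional-534748 localhost minikube] per the log
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    		DNSNames:    []string{"functional-534748", "localhost", "minikube"},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	certDER, err = x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	return certDER, key, err
    }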
	I1210 06:31:41.582635  830558 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:31:41.582823  830558 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:31:41.582837  830558 machine.go:97] duration metric: took 1.050342086s to provisionDockerMachine
	I1210 06:31:41.582845  830558 start.go:293] postStartSetup for "functional-534748" (driver="docker")
	I1210 06:31:41.582857  830558 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:31:41.582912  830558 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:31:41.582957  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.603404  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:41.698354  830558 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:31:41.701779  830558 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 06:31:41.701843  830558 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 06:31:41.701865  830558 command_runner.go:130] > VERSION_ID="12"
	I1210 06:31:41.701877  830558 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 06:31:41.701883  830558 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 06:31:41.701887  830558 command_runner.go:130] > ID=debian
	I1210 06:31:41.701891  830558 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 06:31:41.701896  830558 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 06:31:41.701906  830558 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 06:31:41.701968  830558 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:31:41.702000  830558 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:31:41.702014  830558 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 06:31:41.702084  830558 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 06:31:41.702172  830558 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 06:31:41.702185  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> /etc/ssl/certs/7867512.pem
	I1210 06:31:41.702261  830558 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts -> hosts in /etc/test/nested/copy/786751
	I1210 06:31:41.702269  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts -> /etc/test/nested/copy/786751/hosts
	I1210 06:31:41.702315  830558 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/786751
	I1210 06:31:41.709991  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 06:31:41.727898  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts --> /etc/test/nested/copy/786751/hosts (40 bytes)
	I1210 06:31:41.745651  830558 start.go:296] duration metric: took 162.79042ms for postStartSetup
	I1210 06:31:41.745798  830558 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:31:41.745866  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.763287  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:41.863262  830558 command_runner.go:130] > 19%
	I1210 06:31:41.863843  830558 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:31:41.868394  830558 command_runner.go:130] > 159G
	I1210 06:31:41.868719  830558 fix.go:56] duration metric: took 1.356728705s for fixHost
	I1210 06:31:41.868739  830558 start.go:83] releasing machines lock for "functional-534748", held for 1.35678464s
	I1210 06:31:41.868810  830558 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
	I1210 06:31:41.887031  830558 ssh_runner.go:195] Run: cat /version.json
	I1210 06:31:41.887084  830558 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:31:41.887092  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.887143  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.906606  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:41.920523  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:42.095537  830558 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1210 06:31:42.095667  830558 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765319469-22089", "minikube_version": "v1.37.0", "commit": "3b564f551de69272c9de22efc5b37f8a5b0156c7"}
	I1210 06:31:42.095846  830558 ssh_runner.go:195] Run: systemctl --version
	I1210 06:31:42.103080  830558 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 06:31:42.103120  830558 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 06:31:42.103532  830558 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 06:31:42.109223  830558 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 06:31:42.109308  830558 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:31:42.109410  830558 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:31:42.119226  830558 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:31:42.119255  830558 start.go:496] detecting cgroup driver to use...
	I1210 06:31:42.119293  830558 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:31:42.119365  830558 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 06:31:42.140472  830558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 06:31:42.156795  830558 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:31:42.156872  830558 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:31:42.175919  830558 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:31:42.191679  830558 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:31:42.319538  830558 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:31:42.438460  830558 docker.go:234] disabling docker service ...
	I1210 06:31:42.438580  830558 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:31:42.456224  830558 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:31:42.471442  830558 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:31:42.599250  830558 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:31:42.716867  830558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:31:42.729172  830558 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:31:42.742342  830558 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1210 06:31:42.743581  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 06:31:42.752861  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 06:31:42.762203  830558 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 06:31:42.762278  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 06:31:42.771751  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:31:42.780168  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 06:31:42.788652  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:31:42.797230  830558 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:31:42.805633  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 06:31:42.814368  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 06:31:42.823074  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 06:31:42.832256  830558 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:31:42.839109  830558 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 06:31:42.840076  830558 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:31:42.847676  830558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:31:42.968893  830558 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 06:31:43.099901  830558 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 06:31:43.099974  830558 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 06:31:43.103852  830558 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1210 06:31:43.103874  830558 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 06:31:43.103881  830558 command_runner.go:130] > Device: 0,72	Inode: 1614        Links: 1
	I1210 06:31:43.103888  830558 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 06:31:43.103903  830558 command_runner.go:130] > Access: 2025-12-10 06:31:43.050872938 +0000
	I1210 06:31:43.103913  830558 command_runner.go:130] > Modify: 2025-12-10 06:31:43.050872938 +0000
	I1210 06:31:43.103919  830558 command_runner.go:130] > Change: 2025-12-10 06:31:43.062873060 +0000
	I1210 06:31:43.103925  830558 command_runner.go:130] >  Birth: -
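After `systemctl restart containerd`, start.go budgets 60s for the socket to reappear, and the stat output above shows it came back almost immediately. A minimal sketch of that wait, assuming a simple poll (minikube's actual retry helper differs):

    // A minimal sketch, assuming a plain poll: block until the containerd
    // socket exists (and is a socket) or the 60s budget runs out.
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("containerd socket is up")
    }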
	I1210 06:31:43.103951  830558 start.go:564] Will wait 60s for crictl version
	I1210 06:31:43.104009  830558 ssh_runner.go:195] Run: which crictl
	I1210 06:31:43.107381  830558 command_runner.go:130] > /usr/local/bin/crictl
	I1210 06:31:43.107477  830558 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:31:43.129358  830558 command_runner.go:130] > Version:  0.1.0
	I1210 06:31:43.129383  830558 command_runner.go:130] > RuntimeName:  containerd
	I1210 06:31:43.129392  830558 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1210 06:31:43.129396  830558 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 06:31:43.131610  830558 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 06:31:43.131682  830558 ssh_runner.go:195] Run: containerd --version
	I1210 06:31:43.151833  830558 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1210 06:31:43.153818  830558 ssh_runner.go:195] Run: containerd --version
	I1210 06:31:43.172831  830558 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1210 06:31:43.180465  830558 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 06:31:43.183314  830558 cli_runner.go:164] Run: docker network inspect functional-534748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:31:43.199081  830558 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 06:31:43.202971  830558 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1210 06:31:43.203147  830558 kubeadm.go:884] updating cluster {Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:fal
se CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:31:43.203272  830558 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:31:43.203351  830558 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:31:43.227955  830558 command_runner.go:130] > {
	I1210 06:31:43.227978  830558 command_runner.go:130] >   "images":  [
	I1210 06:31:43.227982  830558 command_runner.go:130] >     {
	I1210 06:31:43.227991  830558 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1210 06:31:43.227996  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228002  830558 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1210 06:31:43.228005  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228009  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228020  830558 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1210 06:31:43.228023  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228028  830558 command_runner.go:130] >       "size":  "40636774",
	I1210 06:31:43.228032  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228036  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228040  830558 command_runner.go:130] >     },
	I1210 06:31:43.228044  830558 command_runner.go:130] >     {
	I1210 06:31:43.228052  830558 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1210 06:31:43.228056  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228061  830558 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 06:31:43.228066  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228082  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228094  830558 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1210 06:31:43.228097  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228102  830558 command_runner.go:130] >       "size":  "8034419",
	I1210 06:31:43.228108  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228112  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228117  830558 command_runner.go:130] >     },
	I1210 06:31:43.228121  830558 command_runner.go:130] >     {
	I1210 06:31:43.228128  830558 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 06:31:43.228135  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228141  830558 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 06:31:43.228153  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228160  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228168  830558 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1210 06:31:43.228174  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228178  830558 command_runner.go:130] >       "size":  "21168808",
	I1210 06:31:43.228182  830558 command_runner.go:130] >       "username":  "nonroot",
	I1210 06:31:43.228186  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228191  830558 command_runner.go:130] >     },
	I1210 06:31:43.228195  830558 command_runner.go:130] >     {
	I1210 06:31:43.228204  830558 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1210 06:31:43.228208  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228215  830558 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1210 06:31:43.228219  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228225  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228233  830558 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1210 06:31:43.228239  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228243  830558 command_runner.go:130] >       "size":  "21136588",
	I1210 06:31:43.228247  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228250  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.228254  830558 command_runner.go:130] >       },
	I1210 06:31:43.228258  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228264  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228272  830558 command_runner.go:130] >     },
	I1210 06:31:43.228279  830558 command_runner.go:130] >     {
	I1210 06:31:43.228286  830558 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1210 06:31:43.228290  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228295  830558 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1210 06:31:43.228299  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228303  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228313  830558 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1210 06:31:43.228317  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228321  830558 command_runner.go:130] >       "size":  "24678359",
	I1210 06:31:43.228331  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228340  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.228350  830558 command_runner.go:130] >       },
	I1210 06:31:43.228354  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228357  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228361  830558 command_runner.go:130] >     },
	I1210 06:31:43.228364  830558 command_runner.go:130] >     {
	I1210 06:31:43.228371  830558 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1210 06:31:43.228384  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228390  830558 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1210 06:31:43.228394  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228398  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228406  830558 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1210 06:31:43.228412  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228416  830558 command_runner.go:130] >       "size":  "20661043",
	I1210 06:31:43.228420  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228424  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.228427  830558 command_runner.go:130] >       },
	I1210 06:31:43.228438  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228443  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228445  830558 command_runner.go:130] >     },
	I1210 06:31:43.228448  830558 command_runner.go:130] >     {
	I1210 06:31:43.228455  830558 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1210 06:31:43.228463  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228471  830558 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1210 06:31:43.228475  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228479  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228487  830558 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1210 06:31:43.228493  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228497  830558 command_runner.go:130] >       "size":  "22429671",
	I1210 06:31:43.228502  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228512  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228515  830558 command_runner.go:130] >     },
	I1210 06:31:43.228518  830558 command_runner.go:130] >     {
	I1210 06:31:43.228525  830558 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1210 06:31:43.228530  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228538  830558 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1210 06:31:43.228542  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228546  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228557  830558 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1210 06:31:43.228566  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228573  830558 command_runner.go:130] >       "size":  "15391364",
	I1210 06:31:43.228577  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228580  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.228584  830558 command_runner.go:130] >       },
	I1210 06:31:43.228594  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228598  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228601  830558 command_runner.go:130] >     },
	I1210 06:31:43.228604  830558 command_runner.go:130] >     {
	I1210 06:31:43.228611  830558 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 06:31:43.228617  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228621  830558 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 06:31:43.228627  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228631  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228641  830558 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1210 06:31:43.228647  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228655  830558 command_runner.go:130] >       "size":  "267939",
	I1210 06:31:43.228659  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228669  830558 command_runner.go:130] >         "value":  "65535"
	I1210 06:31:43.228673  830558 command_runner.go:130] >       },
	I1210 06:31:43.228677  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228681  830558 command_runner.go:130] >       "pinned":  true
	I1210 06:31:43.228686  830558 command_runner.go:130] >     }
	I1210 06:31:43.228689  830558 command_runner.go:130] >   ]
	I1210 06:31:43.228692  830558 command_runner.go:130] > }
	I1210 06:31:43.228843  830558 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 06:31:43.228853  830558 containerd.go:534] Images already preloaded, skipping extraction
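The preload verdict above is reached by comparing this crictl JSON against the image list minikube expects for v1.35.0-beta.0 on containerd. A minimal sketch of decoding the output, with the struct shape taken directly from the JSON in the log (the comparison against the expected list is omitted):

    // A minimal sketch: decode `crictl images --output json` and print
    // the repo tags, the same data the preload check compares.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type imageList struct {
    	Images []struct {
    		ID       string   `json:"id"`
    		RepoTags []string `json:"repoTags"`
    		Size     string   `json:"size"` // crictl emits size as a string
    		Pinned   bool     `json:"pinned"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			fmt.Println(tag, img.Size)
    		}
    	}
    }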
	I1210 06:31:43.228913  830558 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:31:43.254390  830558 command_runner.go:130] > {
	I1210 06:31:43.254411  830558 command_runner.go:130] >   "images":  [
	I1210 06:31:43.254415  830558 command_runner.go:130] >     {
	I1210 06:31:43.254424  830558 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1210 06:31:43.254430  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254435  830558 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1210 06:31:43.254440  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254444  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254453  830558 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1210 06:31:43.254460  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254488  830558 command_runner.go:130] >       "size":  "40636774",
	I1210 06:31:43.254495  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254499  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254508  830558 command_runner.go:130] >     },
	I1210 06:31:43.254512  830558 command_runner.go:130] >     {
	I1210 06:31:43.254527  830558 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1210 06:31:43.254534  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254540  830558 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 06:31:43.254543  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254547  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254556  830558 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1210 06:31:43.254576  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254581  830558 command_runner.go:130] >       "size":  "8034419",
	I1210 06:31:43.254585  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254589  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254600  830558 command_runner.go:130] >     },
	I1210 06:31:43.254603  830558 command_runner.go:130] >     {
	I1210 06:31:43.254609  830558 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 06:31:43.254619  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254624  830558 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 06:31:43.254627  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254638  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254649  830558 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1210 06:31:43.254661  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254665  830558 command_runner.go:130] >       "size":  "21168808",
	I1210 06:31:43.254669  830558 command_runner.go:130] >       "username":  "nonroot",
	I1210 06:31:43.254673  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254677  830558 command_runner.go:130] >     },
	I1210 06:31:43.254680  830558 command_runner.go:130] >     {
	I1210 06:31:43.254694  830558 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1210 06:31:43.254698  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254703  830558 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1210 06:31:43.254706  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254710  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254721  830558 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1210 06:31:43.254725  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254729  830558 command_runner.go:130] >       "size":  "21136588",
	I1210 06:31:43.254735  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.254739  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.254744  830558 command_runner.go:130] >       },
	I1210 06:31:43.254749  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254753  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254765  830558 command_runner.go:130] >     },
	I1210 06:31:43.254768  830558 command_runner.go:130] >     {
	I1210 06:31:43.254779  830558 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1210 06:31:43.254786  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254791  830558 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1210 06:31:43.254795  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254798  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254806  830558 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1210 06:31:43.254810  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254816  830558 command_runner.go:130] >       "size":  "24678359",
	I1210 06:31:43.254820  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.254831  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.254835  830558 command_runner.go:130] >       },
	I1210 06:31:43.254843  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254850  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254853  830558 command_runner.go:130] >     },
	I1210 06:31:43.254860  830558 command_runner.go:130] >     {
	I1210 06:31:43.254867  830558 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1210 06:31:43.254873  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254879  830558 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1210 06:31:43.254882  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254886  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254894  830558 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1210 06:31:43.254897  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254901  830558 command_runner.go:130] >       "size":  "20661043",
	I1210 06:31:43.254907  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.254911  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.254916  830558 command_runner.go:130] >       },
	I1210 06:31:43.254920  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254926  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254929  830558 command_runner.go:130] >     },
	I1210 06:31:43.254932  830558 command_runner.go:130] >     {
	I1210 06:31:43.254939  830558 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1210 06:31:43.254945  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254951  830558 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1210 06:31:43.254958  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254962  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254970  830558 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1210 06:31:43.254975  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254979  830558 command_runner.go:130] >       "size":  "22429671",
	I1210 06:31:43.254982  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254987  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254992  830558 command_runner.go:130] >     },
	I1210 06:31:43.254995  830558 command_runner.go:130] >     {
	I1210 06:31:43.255004  830558 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1210 06:31:43.255008  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.255022  830558 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1210 06:31:43.255026  830558 command_runner.go:130] >       ],
	I1210 06:31:43.255030  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.255038  830558 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1210 06:31:43.255044  830558 command_runner.go:130] >       ],
	I1210 06:31:43.255048  830558 command_runner.go:130] >       "size":  "15391364",
	I1210 06:31:43.255051  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.255055  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.255058  830558 command_runner.go:130] >       },
	I1210 06:31:43.255061  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.255065  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.255069  830558 command_runner.go:130] >     },
	I1210 06:31:43.255072  830558 command_runner.go:130] >     {
	I1210 06:31:43.255081  830558 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 06:31:43.255088  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.255093  830558 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 06:31:43.255098  830558 command_runner.go:130] >       ],
	I1210 06:31:43.255102  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.255109  830558 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1210 06:31:43.255112  830558 command_runner.go:130] >       ],
	I1210 06:31:43.255116  830558 command_runner.go:130] >       "size":  "267939",
	I1210 06:31:43.255122  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.255129  830558 command_runner.go:130] >         "value":  "65535"
	I1210 06:31:43.255136  830558 command_runner.go:130] >       },
	I1210 06:31:43.255140  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.255143  830558 command_runner.go:130] >       "pinned":  true
	I1210 06:31:43.255147  830558 command_runner.go:130] >     }
	I1210 06:31:43.255150  830558 command_runner.go:130] >   ]
	I1210 06:31:43.255153  830558 command_runner.go:130] > }
	I1210 06:31:43.257476  830558 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 06:31:43.257497  830558 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:31:43.257505  830558 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1210 06:31:43.257607  830558 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-534748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
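The generated drop-in above uses the standard systemd idiom of an empty `ExecStart=` line to clear the ExecStart inherited from the packaged kubelet unit before supplying the override. A minimal sketch of rendering such a unit with text/template; the flag set is abbreviated from the full kubelet command line logged above:

    // A minimal sketch: render a kubelet drop-in with text/template.
    // The empty ExecStart= clears the inherited ExecStart before the
    // override; the flag set here is abbreviated for illustration.
    package main

    import (
    	"os"
    	"text/template"
    )

    const unit = `[Unit]
    Wants=containerd.service

    [Service]
    ExecStart=
    ExecStart={{.Kubelet}} --hostname-override={{.Node}} --node-ip={{.IP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(unit))
    	err := t.Execute(os.Stdout, map[string]string{
    		"Kubelet": "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
    		"Node":    "functional-534748",
    		"IP":      "192.168.49.2",
    	})
    	if err != nil {
    		panic(err)
    	}
    }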
	I1210 06:31:43.257674  830558 ssh_runner.go:195] Run: sudo crictl info
	I1210 06:31:43.280486  830558 command_runner.go:130] > {
	I1210 06:31:43.280508  830558 command_runner.go:130] >   "cniconfig": {
	I1210 06:31:43.280515  830558 command_runner.go:130] >     "Networks": [
	I1210 06:31:43.280519  830558 command_runner.go:130] >       {
	I1210 06:31:43.280525  830558 command_runner.go:130] >         "Config": {
	I1210 06:31:43.280531  830558 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1210 06:31:43.280536  830558 command_runner.go:130] >           "Name": "cni-loopback",
	I1210 06:31:43.280541  830558 command_runner.go:130] >           "Plugins": [
	I1210 06:31:43.280545  830558 command_runner.go:130] >             {
	I1210 06:31:43.280549  830558 command_runner.go:130] >               "Network": {
	I1210 06:31:43.280553  830558 command_runner.go:130] >                 "ipam": {},
	I1210 06:31:43.280572  830558 command_runner.go:130] >                 "type": "loopback"
	I1210 06:31:43.280586  830558 command_runner.go:130] >               },
	I1210 06:31:43.280593  830558 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1210 06:31:43.280596  830558 command_runner.go:130] >             }
	I1210 06:31:43.280600  830558 command_runner.go:130] >           ],
	I1210 06:31:43.280614  830558 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1210 06:31:43.280625  830558 command_runner.go:130] >         },
	I1210 06:31:43.280630  830558 command_runner.go:130] >         "IFName": "lo"
	I1210 06:31:43.280633  830558 command_runner.go:130] >       }
	I1210 06:31:43.280637  830558 command_runner.go:130] >     ],
	I1210 06:31:43.280642  830558 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1210 06:31:43.280652  830558 command_runner.go:130] >     "PluginDirs": [
	I1210 06:31:43.280656  830558 command_runner.go:130] >       "/opt/cni/bin"
	I1210 06:31:43.280660  830558 command_runner.go:130] >     ],
	I1210 06:31:43.280671  830558 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1210 06:31:43.280679  830558 command_runner.go:130] >     "Prefix": "eth"
	I1210 06:31:43.280682  830558 command_runner.go:130] >   },
	I1210 06:31:43.280686  830558 command_runner.go:130] >   "config": {
	I1210 06:31:43.280693  830558 command_runner.go:130] >     "cdiSpecDirs": [
	I1210 06:31:43.280699  830558 command_runner.go:130] >       "/etc/cdi",
	I1210 06:31:43.280705  830558 command_runner.go:130] >       "/var/run/cdi"
	I1210 06:31:43.280710  830558 command_runner.go:130] >     ],
	I1210 06:31:43.280714  830558 command_runner.go:130] >     "cni": {
	I1210 06:31:43.280725  830558 command_runner.go:130] >       "binDir": "",
	I1210 06:31:43.280729  830558 command_runner.go:130] >       "binDirs": [
	I1210 06:31:43.280732  830558 command_runner.go:130] >         "/opt/cni/bin"
	I1210 06:31:43.280736  830558 command_runner.go:130] >       ],
	I1210 06:31:43.280740  830558 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1210 06:31:43.280744  830558 command_runner.go:130] >       "confTemplate": "",
	I1210 06:31:43.280747  830558 command_runner.go:130] >       "ipPref": "",
	I1210 06:31:43.280751  830558 command_runner.go:130] >       "maxConfNum": 1,
	I1210 06:31:43.280755  830558 command_runner.go:130] >       "setupSerially": false,
	I1210 06:31:43.280759  830558 command_runner.go:130] >       "useInternalLoopback": false
	I1210 06:31:43.280762  830558 command_runner.go:130] >     },
	I1210 06:31:43.280768  830558 command_runner.go:130] >     "containerd": {
	I1210 06:31:43.280772  830558 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1210 06:31:43.280776  830558 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1210 06:31:43.280781  830558 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1210 06:31:43.280789  830558 command_runner.go:130] >       "runtimes": {
	I1210 06:31:43.280793  830558 command_runner.go:130] >         "runc": {
	I1210 06:31:43.280797  830558 command_runner.go:130] >           "ContainerAnnotations": null,
	I1210 06:31:43.280802  830558 command_runner.go:130] >           "PodAnnotations": null,
	I1210 06:31:43.280806  830558 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1210 06:31:43.280811  830558 command_runner.go:130] >           "cgroupWritable": false,
	I1210 06:31:43.280814  830558 command_runner.go:130] >           "cniConfDir": "",
	I1210 06:31:43.280818  830558 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1210 06:31:43.280822  830558 command_runner.go:130] >           "io_type": "",
	I1210 06:31:43.280827  830558 command_runner.go:130] >           "options": {
	I1210 06:31:43.280838  830558 command_runner.go:130] >             "BinaryName": "",
	I1210 06:31:43.280850  830558 command_runner.go:130] >             "CriuImagePath": "",
	I1210 06:31:43.280854  830558 command_runner.go:130] >             "CriuWorkPath": "",
	I1210 06:31:43.280858  830558 command_runner.go:130] >             "IoGid": 0,
	I1210 06:31:43.280862  830558 command_runner.go:130] >             "IoUid": 0,
	I1210 06:31:43.280866  830558 command_runner.go:130] >             "NoNewKeyring": false,
	I1210 06:31:43.280872  830558 command_runner.go:130] >             "Root": "",
	I1210 06:31:43.280877  830558 command_runner.go:130] >             "ShimCgroup": "",
	I1210 06:31:43.280883  830558 command_runner.go:130] >             "SystemdCgroup": false
	I1210 06:31:43.280887  830558 command_runner.go:130] >           },
	I1210 06:31:43.280892  830558 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1210 06:31:43.280898  830558 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1210 06:31:43.280902  830558 command_runner.go:130] >           "runtimePath": "",
	I1210 06:31:43.280907  830558 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1210 06:31:43.280912  830558 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1210 06:31:43.280918  830558 command_runner.go:130] >           "snapshotter": ""
	I1210 06:31:43.280921  830558 command_runner.go:130] >         }
	I1210 06:31:43.280925  830558 command_runner.go:130] >       }
	I1210 06:31:43.280930  830558 command_runner.go:130] >     },
	I1210 06:31:43.280941  830558 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1210 06:31:43.280949  830558 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1210 06:31:43.280959  830558 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1210 06:31:43.280965  830558 command_runner.go:130] >     "disableApparmor": false,
	I1210 06:31:43.280970  830558 command_runner.go:130] >     "disableHugetlbController": true,
	I1210 06:31:43.280976  830558 command_runner.go:130] >     "disableProcMount": false,
	I1210 06:31:43.280983  830558 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1210 06:31:43.280986  830558 command_runner.go:130] >     "enableCDI": true,
	I1210 06:31:43.280991  830558 command_runner.go:130] >     "enableSelinux": false,
	I1210 06:31:43.280995  830558 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1210 06:31:43.281002  830558 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1210 06:31:43.281009  830558 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1210 06:31:43.281014  830558 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1210 06:31:43.281021  830558 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1210 06:31:43.281029  830558 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1210 06:31:43.281034  830558 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1210 06:31:43.281040  830558 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1210 06:31:43.281047  830558 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1210 06:31:43.281052  830558 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1210 06:31:43.281057  830558 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1210 06:31:43.281062  830558 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1210 06:31:43.281067  830558 command_runner.go:130] >   },
	I1210 06:31:43.281071  830558 command_runner.go:130] >   "features": {
	I1210 06:31:43.281076  830558 command_runner.go:130] >     "supplemental_groups_policy": true
	I1210 06:31:43.281079  830558 command_runner.go:130] >   },
	I1210 06:31:43.281083  830558 command_runner.go:130] >   "golang": "go1.24.9",
	I1210 06:31:43.281095  830558 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1210 06:31:43.281107  830558 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1210 06:31:43.281111  830558 command_runner.go:130] >   "runtimeHandlers": [
	I1210 06:31:43.281114  830558 command_runner.go:130] >     {
	I1210 06:31:43.281118  830558 command_runner.go:130] >       "features": {
	I1210 06:31:43.281129  830558 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1210 06:31:43.281134  830558 command_runner.go:130] >         "user_namespaces": true
	I1210 06:31:43.281137  830558 command_runner.go:130] >       }
	I1210 06:31:43.281142  830558 command_runner.go:130] >     },
	I1210 06:31:43.281145  830558 command_runner.go:130] >     {
	I1210 06:31:43.281148  830558 command_runner.go:130] >       "features": {
	I1210 06:31:43.281153  830558 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1210 06:31:43.281158  830558 command_runner.go:130] >         "user_namespaces": true
	I1210 06:31:43.281161  830558 command_runner.go:130] >       },
	I1210 06:31:43.281168  830558 command_runner.go:130] >       "name": "runc"
	I1210 06:31:43.281171  830558 command_runner.go:130] >     }
	I1210 06:31:43.281174  830558 command_runner.go:130] >   ],
	I1210 06:31:43.281178  830558 command_runner.go:130] >   "status": {
	I1210 06:31:43.281183  830558 command_runner.go:130] >     "conditions": [
	I1210 06:31:43.281186  830558 command_runner.go:130] >       {
	I1210 06:31:43.281190  830558 command_runner.go:130] >         "message": "",
	I1210 06:31:43.281205  830558 command_runner.go:130] >         "reason": "",
	I1210 06:31:43.281209  830558 command_runner.go:130] >         "status": true,
	I1210 06:31:43.281214  830558 command_runner.go:130] >         "type": "RuntimeReady"
	I1210 06:31:43.281220  830558 command_runner.go:130] >       },
	I1210 06:31:43.281224  830558 command_runner.go:130] >       {
	I1210 06:31:43.281230  830558 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1210 06:31:43.281235  830558 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1210 06:31:43.281239  830558 command_runner.go:130] >         "status": false,
	I1210 06:31:43.281243  830558 command_runner.go:130] >         "type": "NetworkReady"
	I1210 06:31:43.281246  830558 command_runner.go:130] >       },
	I1210 06:31:43.281249  830558 command_runner.go:130] >       {
	I1210 06:31:43.281271  830558 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1210 06:31:43.281280  830558 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1210 06:31:43.281286  830558 command_runner.go:130] >         "status": false,
	I1210 06:31:43.281292  830558 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1210 06:31:43.281298  830558 command_runner.go:130] >       }
	I1210 06:31:43.281301  830558 command_runner.go:130] >     ]
	I1210 06:31:43.281304  830558 command_runner.go:130] >   }
	I1210 06:31:43.281308  830558 command_runner.go:130] > }
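The JSON dump above is the runtime status minikube collected from containerd; its shape matches `crictl info` output. The NetworkReady=false condition is expected at this point: /etc/cni/net.d is still empty until a CNI (kindnet, chosen on the next lines) is applied. A minimal Go sketch of reading those conditions, assuming the `crictl info` JSON shape shown above (the sudo invocation is an assumption):

    // sketch: surface the CRI runtime conditions from `crictl info` output
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    type criInfo struct {
        Status struct {
            Conditions []struct {
                Type    string `json:"type"`
                Status  bool   `json:"status"`
                Reason  string `json:"reason"`
                Message string `json:"message"`
            } `json:"conditions"`
        } `json:"status"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "info").Output()
        if err != nil {
            log.Fatal(err)
        }
        var info criInfo
        if err := json.Unmarshal(out, &info); err != nil {
            log.Fatal(err)
        }
        for _, c := range info.Status.Conditions {
            // e.g. NetworkReady=false (NetworkPluginNotReady)
            fmt.Printf("%s=%v (%s)\n", c.Type, c.Status, c.Reason)
        }
    }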
	I1210 06:31:43.283879  830558 cni.go:84] Creating CNI manager for ""
	I1210 06:31:43.283902  830558 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:31:43.283924  830558 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:31:43.283950  830558 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-534748 NodeName:functional-534748 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:31:43.284076  830558 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-534748"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
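The generated file holds four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A minimal Go sketch that enumerates them; gopkg.in/yaml.v3 and the local file name kubeadm.yaml are illustrative assumptions:

    // sketch: enumerate the YAML documents in a multi-doc kubeadm config
    package main

    import (
        "errors"
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config above
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
                break
            } else if err != nil {
                log.Fatal(err)
            }
            fmt.Println(doc["apiVersion"], doc["kind"])
        }
    }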
	
	I1210 06:31:43.284154  830558 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 06:31:43.290942  830558 command_runner.go:130] > kubeadm
	I1210 06:31:43.290962  830558 command_runner.go:130] > kubectl
	I1210 06:31:43.290967  830558 command_runner.go:130] > kubelet
	I1210 06:31:43.291913  830558 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:31:43.292013  830558 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:31:43.299680  830558 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 06:31:43.314082  830558 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 06:31:43.330260  830558 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1210 06:31:43.347625  830558 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:31:43.352127  830558 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
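The grep above confirms the control-plane.minikube.internal entry already exists in /etc/hosts, so no write is needed. A hedged sketch of the check-then-append idea (not minikube's actual code; the function name is illustrative):

    // sketch: ensure a hosts entry exists, mirroring the grep check above
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func ensureHostsEntry(ip, host string) error {
        const hosts = "/etc/hosts"
        b, err := os.ReadFile(hosts)
        if err != nil {
            return err
        }
        line := ip + "\t" + host
        if strings.Contains(string(b), line) {
            return nil // already present, as in the log above
        }
        f, err := os.OpenFile(hosts, os.O_APPEND|os.O_WRONLY, 0644)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = fmt.Fprintln(f, line)
        return err
    }

    func main() {
        if err := ensureHostsEntry("192.168.49.2", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }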
	I1210 06:31:43.352925  830558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:31:43.471703  830558 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:31:44.297320  830558 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748 for IP: 192.168.49.2
	I1210 06:31:44.297353  830558 certs.go:195] generating shared ca certs ...
	I1210 06:31:44.297370  830558 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:31:44.297565  830558 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 06:31:44.297620  830558 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 06:31:44.297640  830558 certs.go:257] generating profile certs ...
	I1210 06:31:44.297767  830558 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key
	I1210 06:31:44.297844  830558 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key.7cb3dc2f
	I1210 06:31:44.297905  830558 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key
	I1210 06:31:44.297923  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 06:31:44.297952  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 06:31:44.297969  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 06:31:44.297986  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 06:31:44.297997  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 06:31:44.298022  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 06:31:44.298036  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 06:31:44.298051  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 06:31:44.298107  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 06:31:44.298147  830558 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 06:31:44.298160  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:31:44.298194  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 06:31:44.298223  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:31:44.298262  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 06:31:44.298323  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 06:31:44.298363  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem -> /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.298380  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.298399  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.299062  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:31:44.319985  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:31:44.339121  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:31:44.360050  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:31:44.381013  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:31:44.398560  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:31:44.416157  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:31:44.433967  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:31:44.452197  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 06:31:44.470088  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 06:31:44.487844  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:31:44.505551  830558 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:31:44.518440  830558 ssh_runner.go:195] Run: openssl version
	I1210 06:31:44.524638  830558 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 06:31:44.525053  830558 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.532466  830558 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 06:31:44.539857  830558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.543663  830558 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.543696  830558 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.543746  830558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.585800  830558 command_runner.go:130] > 51391683
	I1210 06:31:44.586242  830558 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:31:44.594754  830558 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.602172  830558 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 06:31:44.609494  830558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.613294  830558 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.613412  830558 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.613500  830558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.654003  830558 command_runner.go:130] > 3ec20f2e
	I1210 06:31:44.654513  830558 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:31:44.661754  830558 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.668842  830558 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:31:44.676441  830558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.680175  830558 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.680286  830558 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.680373  830558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.725770  830558 command_runner.go:130] > b5213941
	I1210 06:31:44.726319  830558 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
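The test/ln/openssl/test sequence repeated above installs each CA using OpenSSL's subject-hash convention: `openssl x509 -hash -noout` prints the subject hash (51391683, 3ec20f2e, b5213941 in the log) and a <hash>.0 symlink under /etc/ssl/certs makes the certificate discoverable by the TLS library. One iteration, sketched in Go with paths taken from the log:

    // sketch: one iteration of the CA trust-install loop logged above
    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // equivalent to the `sudo ln -fs` + `sudo test -L` pair in the log
        _ = os.Remove(link)
        if err := os.Symlink(cert, link); err != nil {
            log.Fatal(err)
        }
    }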
	I1210 06:31:44.734095  830558 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:31:44.737911  830558 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:31:44.737986  830558 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1210 06:31:44.737999  830558 command_runner.go:130] > Device: 259,1	Inode: 1050653     Links: 1
	I1210 06:31:44.738007  830558 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 06:31:44.738013  830558 command_runner.go:130] > Access: 2025-12-10 06:27:36.644508596 +0000
	I1210 06:31:44.738018  830558 command_runner.go:130] > Modify: 2025-12-10 06:23:32.161941675 +0000
	I1210 06:31:44.738023  830558 command_runner.go:130] > Change: 2025-12-10 06:23:32.161941675 +0000
	I1210 06:31:44.738028  830558 command_runner.go:130] >  Birth: 2025-12-10 06:23:32.161941675 +0000
	I1210 06:31:44.738118  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:31:44.779233  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.779410  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:31:44.820004  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.820457  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:31:44.860741  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.861258  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:31:44.902039  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.902514  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:31:44.943742  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.944234  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 06:31:44.986027  830558 command_runner.go:130] > Certificate will not expire
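Each `openssl x509 -checkend 86400` call exits non-zero if the certificate expires within the next 24 hours, so the exit status alone decides whether regeneration is needed; here every cert reports "Certificate will not expire". A minimal sketch of that check:

    // sketch: does this cert expire within the next 24h?
    package main

    import (
        "fmt"
        "os/exec"
    )

    func expiresSoon(path string) bool {
        cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400")
        // exit status 0 => "Certificate will not expire" (as logged above)
        return cmd.Run() != nil
    }

    func main() {
        fmt.Println(expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
    }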
	I1210 06:31:44.986500  830558 kubeadm.go:401] StartCluster: {Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:31:44.986586  830558 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 06:31:44.986679  830558 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:31:45.063121  830558 cri.go:89] found id: ""
	I1210 06:31:45.063216  830558 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:31:45.099783  830558 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 06:31:45.099866  830558 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 06:31:45.099891  830558 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 06:31:45.101399  830558 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:31:45.101477  830558 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:31:45.101575  830558 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:31:45.115892  830558 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:31:45.116487  830558 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-534748" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:45.116718  830558 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-784887/kubeconfig needs updating (will repair): [kubeconfig missing "functional-534748" cluster setting kubeconfig missing "functional-534748" context setting]
	I1210 06:31:45.117177  830558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
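The repair adds the missing cluster and context stanzas for functional-534748 to the shared kubeconfig. A hedged client-go sketch of the same idea (values are taken from the log; minikube's actual kubeconfig code differs):

    // sketch: add a missing cluster/context to a kubeconfig
    package main

    import (
        "log"

        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
        path := "/home/jenkins/minikube-integration/22089-784887/kubeconfig"
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            log.Fatal(err)
        }
        cfg.Clusters["functional-534748"] = &clientcmdapi.Cluster{
            Server:               "https://192.168.49.2:8441",
            CertificateAuthority: "/home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt",
        }
        cfg.Contexts["functional-534748"] = &clientcmdapi.Context{
            Cluster:  "functional-534748",
            AuthInfo: "functional-534748",
        }
        if err := clientcmd.WriteToFile(*cfg, path); err != nil {
            log.Fatal(err)
        }
    }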
	I1210 06:31:45.117949  830558 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:45.118213  830558 kapi.go:59] client config for functional-534748: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key", CAFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:31:45.118984  830558 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 06:31:45.119085  830558 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 06:31:45.119134  830558 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 06:31:45.119161  830558 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 06:31:45.119217  830558 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 06:31:45.119055  830558 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 06:31:45.119702  830558 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:31:45.137495  830558 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1210 06:31:45.137534  830558 kubeadm.go:602] duration metric: took 36.034287ms to restartPrimaryControlPlane
	I1210 06:31:45.137546  830558 kubeadm.go:403] duration metric: took 151.054854ms to StartCluster
	I1210 06:31:45.137576  830558 settings.go:142] acquiring lock: {Name:mkea9bfe0055ec090e4ce10a287599541b65b38e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:31:45.137653  830558 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:45.138311  830558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:31:45.138643  830558 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 06:31:45.139043  830558 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:31:45.139108  830558 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:31:45.139177  830558 addons.go:70] Setting storage-provisioner=true in profile "functional-534748"
	I1210 06:31:45.139193  830558 addons.go:239] Setting addon storage-provisioner=true in "functional-534748"
	I1210 06:31:45.139221  830558 host.go:66] Checking if "functional-534748" exists ...
	I1210 06:31:45.139239  830558 addons.go:70] Setting default-storageclass=true in profile "functional-534748"
	I1210 06:31:45.139259  830558 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-534748"
	I1210 06:31:45.139583  830558 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:31:45.139701  830558 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:31:45.145574  830558 out.go:179] * Verifying Kubernetes components...
	I1210 06:31:45.148690  830558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:31:45.190248  830558 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:45.190435  830558 kapi.go:59] client config for functional-534748: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key", CAFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:31:45.190756  830558 addons.go:239] Setting addon default-storageclass=true in "functional-534748"
	I1210 06:31:45.190791  830558 host.go:66] Checking if "functional-534748" exists ...
	I1210 06:31:45.192137  830558 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:31:45.207281  830558 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:31:45.210256  830558 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:45.210285  830558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:31:45.210364  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:45.229978  830558 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:45.230080  830558 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:31:45.230235  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:45.286606  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:45.319378  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
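The docker container inspect template a few lines up extracts the host port mapped to the container's 22/tcp, which is how the SSH clients above end up at 127.0.0.1:33530. The same lookup, sketched in Go with the template taken verbatim from the log:

    // sketch: resolve the host port mapped to a container's 22/tcp
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            "functional-534748").Output()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("ssh port:", strings.TrimSpace(string(out))) // e.g. 33530
    }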
	I1210 06:31:45.390267  830558 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:31:45.420552  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:45.445487  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:46.049742  830558 node_ready.go:35] waiting up to 6m0s for node "functional-534748" to be "Ready" ...
	I1210 06:31:46.049893  830558 type.go:168] "Request Body" body=""
	I1210 06:31:46.049953  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:46.050234  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.050272  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.050293  830558 retry.go:31] will retry after 223.621304ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.050345  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.050359  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.050366  830558 retry.go:31] will retry after 336.04204ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
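Every apply fails the same way: kubectl validates manifests against the API server's /openapi/v2 endpoint, and since the apiserver is not accepting connections yet, the validation step itself dies with connection refused; it is not a manifest problem, and --validate=false would bypass it. minikube reacts by retrying with growing, jittered delays (223ms, 336ms, 342ms, ...). A sketch of that shape (not minikube's retry.go, just the pattern):

    // sketch: retry with roughly doubling, jittered delays
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func retry(attempts int, f func() error) error {
        delay := 200 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            if err = f(); err == nil {
                return nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            time.Sleep(delay + jitter)
            delay *= 2
        }
        return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
    }

    func main() {
        _ = retry(5, func() error { return fmt.Errorf("connection refused") })
    }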
	I1210 06:31:46.050483  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:46.274791  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:46.331904  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.335903  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.335940  830558 retry.go:31] will retry after 342.637774ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.387178  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:46.449259  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.449297  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.449332  830558 retry.go:31] will retry after 384.971387ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.550591  830558 type.go:168] "Request Body" body=""
	I1210 06:31:46.550669  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:46.551072  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:46.679392  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:46.735005  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.738824  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.738907  830558 retry.go:31] will retry after 477.156435ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.835016  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:46.898535  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.902447  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.902505  830558 retry.go:31] will retry after 587.076477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:47.050715  830558 type.go:168] "Request Body" body=""
	I1210 06:31:47.050787  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:47.051147  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:47.216664  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:47.275932  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:47.275982  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:47.276003  830558 retry.go:31] will retry after 1.079016213s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:47.490360  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:47.550012  830558 type.go:168] "Request Body" body=""
	I1210 06:31:47.550089  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:47.550390  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:47.551946  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:47.551982  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:47.552018  830558 retry.go:31] will retry after 1.089774327s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:48.050900  830558 type.go:168] "Request Body" body=""
	I1210 06:31:48.051018  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:48.051381  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:48.051446  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
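The repeated GET /api/v1/nodes/functional-534748 probes are minikube waiting for the node's Ready condition on a ~500ms cadence; while the apiserver is down each probe fails with connection refused, as the warning shows. In client-go terms the check looks roughly like this sketch (clientset construction omitted; not minikube's node_ready.go):

    // sketch: poll a node's Ready condition via client-go
    package nodewait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            // on connection refused (apiserver still starting) just retry
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond):
            }
        }
    }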
	I1210 06:31:48.355639  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:48.413382  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:48.416787  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:48.416855  830558 retry.go:31] will retry after 1.248652089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:48.550032  830558 type.go:168] "Request Body" body=""
	I1210 06:31:48.550107  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:48.550399  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:48.642762  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:48.712914  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:48.712955  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:48.712975  830558 retry.go:31] will retry after 929.620731ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:49.050281  830558 type.go:168] "Request Body" body=""
	I1210 06:31:49.050356  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:49.050675  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:49.550083  830558 type.go:168] "Request Body" body=""
	I1210 06:31:49.550159  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:49.550564  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:49.643743  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:49.666178  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:49.715961  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:49.724279  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:49.724309  830558 retry.go:31] will retry after 2.037720794s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:49.735770  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:49.735805  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:49.735824  830558 retry.go:31] will retry after 1.943919735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:50.050051  830558 type.go:168] "Request Body" body=""
	I1210 06:31:50.050130  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:50.050489  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:50.550100  830558 type.go:168] "Request Body" body=""
	I1210 06:31:50.550171  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:50.550494  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:50.550550  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
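Interleaved with the apply retries is a second loop: roughly every 500 ms minikube issues the GET on /api/v1/nodes/functional-534748 seen in the round_trippers lines and checks the node's Ready condition (node_ready.go:55); while the apiserver on 192.168.49.2:8441 is down, each cycle ends in the connection-refused warning above. A hypothetical client-go version of that poll, assuming the kubeconfig path and node name from the log; the loop is a sketch, not minikube's implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-534748", metav1.GetOptions{})
		if err != nil {
			// With the apiserver down this is the logged
			// "dial tcp 192.168.49.2:8441: connect: connection refused".
			fmt.Println("will retry:", err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}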
	I1210 06:31:51.050020  830558 type.go:168] "Request Body" body=""
	I1210 06:31:51.050105  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:51.050456  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:51.550105  830558 type.go:168] "Request Body" body=""
	I1210 06:31:51.550181  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:51.550525  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:51.680862  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:51.745585  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:51.745620  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:51.745639  830558 retry.go:31] will retry after 2.112684099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:51.762814  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:51.821569  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:51.825567  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:51.825603  830558 retry.go:31] will retry after 2.699110245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:52.050957  830558 type.go:168] "Request Body" body=""
	I1210 06:31:52.051054  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:52.051439  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:52.550045  830558 type.go:168] "Request Body" body=""
	I1210 06:31:52.550123  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:52.550446  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:53.050176  830558 type.go:168] "Request Body" body=""
	I1210 06:31:53.050253  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:53.050635  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:53.050697  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:53.550816  830558 type.go:168] "Request Body" body=""
	I1210 06:31:53.550907  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:53.551250  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:53.858630  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:53.918073  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:53.921869  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:53.921905  830558 retry.go:31] will retry after 2.635687612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:54.050031  830558 type.go:168] "Request Body" body=""
	I1210 06:31:54.050137  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:54.050511  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:54.525086  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:54.550579  830558 type.go:168] "Request Body" body=""
	I1210 06:31:54.550656  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:54.550932  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:54.585338  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:54.588955  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:54.588990  830558 retry.go:31] will retry after 2.164216453s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
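Note what is actually failing in these apply attempts: with client-side validation enabled, kubectl first downloads the server's OpenAPI schema (the logged GET https://localhost:8441/openapi/v2), so the command dies during validation before any object is sent. The error's suggested --validate=false only skips that schema download; the apply itself would still need a reachable apiserver. A hedged sketch of both invocations, with the manifest path taken from the log and the wrapper function invented for illustration:

package main

import (
	"fmt"
	"os/exec"
)

// apply runs kubectl against a manifest, optionally skipping the
// schema-validation step the error message suggests disabling.
func apply(manifest string, validate bool) error {
	args := []string{"apply", "--force", "-f", manifest}
	if !validate {
		// Skips the /openapi/v2 download, but the apply request itself
		// still needs a reachable apiserver, so this only silences the
		// validation half of the failure.
		args = append(args, "--validate=false")
	}
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl %v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	fmt.Println(apply("/etc/kubernetes/addons/storageclass.yaml", false))
}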
	I1210 06:31:55.050098  830558 type.go:168] "Request Body" body=""
	I1210 06:31:55.050166  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:55.050436  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:55.550589  830558 type.go:168] "Request Body" body=""
	I1210 06:31:55.550697  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:55.551055  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:55.551113  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:56.050733  830558 type.go:168] "Request Body" body=""
	I1210 06:31:56.050815  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:56.051188  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:56.549910  830558 type.go:168] "Request Body" body=""
	I1210 06:31:56.550000  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:56.550302  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:56.558696  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:56.634154  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:56.634201  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:56.634222  830558 retry.go:31] will retry after 5.842380515s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:56.753466  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:56.822332  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:56.822371  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:56.822391  830558 retry.go:31] will retry after 4.388036914s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:57.050861  830558 type.go:168] "Request Body" body=""
	I1210 06:31:57.050942  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:57.051261  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:57.550002  830558 type.go:168] "Request Body" body=""
	I1210 06:31:57.550079  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:57.550421  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:58.049946  830558 type.go:168] "Request Body" body=""
	I1210 06:31:58.050027  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:58.050302  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:58.050362  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:58.550039  830558 type.go:168] "Request Body" body=""
	I1210 06:31:58.550138  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:58.550513  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:59.050184  830558 type.go:168] "Request Body" body=""
	I1210 06:31:59.050262  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:59.050626  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:59.550005  830558 type.go:168] "Request Body" body=""
	I1210 06:31:59.550077  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:59.550409  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:00.050135  830558 type.go:168] "Request Body" body=""
	I1210 06:32:00.050225  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:00.050569  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:00.050621  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
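One detail of the poll requests worth calling out: the Accept header asks for application/vnd.kubernetes.protobuf with an application/json fallback. That is ordinary client-go content negotiation, not anything specific to this failure. A sketch of how that negotiation is configured on a rest.Config, with the host and media types copied from the logged requests; auth and TLS fields are omitted, so this is illustrative rather than a working connection:

package main

import (
	"fmt"

	"k8s.io/client-go/rest"
)

func main() {
	config := &rest.Config{
		Host: "https://192.168.49.2:8441",
		ContentConfig: rest.ContentConfig{
			// Matches the logged "Accept:" header: prefer protobuf,
			// fall back to JSON.
			AcceptContentTypes: "application/vnd.kubernetes.protobuf,application/json",
			ContentType:        "application/vnd.kubernetes.protobuf",
		},
	}
	fmt.Println(config.AcceptContentTypes)
}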
	I1210 06:32:00.550824  830558 type.go:168] "Request Body" body=""
	I1210 06:32:00.550903  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:00.551281  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:01.050843  830558 type.go:168] "Request Body" body=""
	I1210 06:32:01.050921  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:01.051196  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:01.210631  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:32:01.270135  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:01.273736  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:01.273765  830558 retry.go:31] will retry after 7.330909522s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:01.550049  830558 type.go:168] "Request Body" body=""
	I1210 06:32:01.550130  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:01.550449  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:02.050246  830558 type.go:168] "Request Body" body=""
	I1210 06:32:02.050347  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:02.050709  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:02.050768  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:02.477366  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:32:02.540275  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:02.540316  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:02.540336  830558 retry.go:31] will retry after 13.941322707s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:02.550443  830558 type.go:168] "Request Body" body=""
	I1210 06:32:02.550571  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:02.550850  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:03.050685  830558 type.go:168] "Request Body" body=""
	I1210 06:32:03.050764  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:03.051097  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:03.550804  830558 type.go:168] "Request Body" body=""
	I1210 06:32:03.550886  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:03.551211  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:04.050812  830558 type.go:168] "Request Body" body=""
	I1210 06:32:04.050903  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:04.051169  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:04.051225  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:04.549972  830558 type.go:168] "Request Body" body=""
	I1210 06:32:04.550053  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:04.550400  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:05.050150  830558 type.go:168] "Request Body" body=""
	I1210 06:32:05.050229  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:05.050552  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:05.550574  830558 type.go:168] "Request Body" body=""
	I1210 06:32:05.550641  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:05.550922  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:06.050749  830558 type.go:168] "Request Body" body=""
	I1210 06:32:06.050829  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:06.051208  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:06.051274  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:06.549940  830558 type.go:168] "Request Body" body=""
	I1210 06:32:06.550016  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:06.550350  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:07.050649  830558 type.go:168] "Request Body" body=""
	I1210 06:32:07.050725  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:07.050985  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:07.550782  830558 type.go:168] "Request Body" body=""
	I1210 06:32:07.550862  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:07.551221  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:08.050933  830558 type.go:168] "Request Body" body=""
	I1210 06:32:08.051023  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:08.051376  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:08.051435  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:08.550082  830558 type.go:168] "Request Body" body=""
	I1210 06:32:08.550158  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:08.550423  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:08.605823  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:32:08.661807  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:08.666022  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:08.666054  830558 retry.go:31] will retry after 18.459732711s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:09.050632  830558 type.go:168] "Request Body" body=""
	I1210 06:32:09.050712  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:09.051043  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:09.550857  830558 type.go:168] "Request Body" body=""
	I1210 06:32:09.550938  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:09.551276  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:10.050543  830558 type.go:168] "Request Body" body=""
	I1210 06:32:10.050622  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:10.050913  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:10.550123  830558 type.go:168] "Request Body" body=""
	I1210 06:32:10.550201  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:10.550566  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:10.550627  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:11.050158  830558 type.go:168] "Request Body" body=""
	I1210 06:32:11.050241  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:11.050595  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:11.549977  830558 type.go:168] "Request Body" body=""
	I1210 06:32:11.550061  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:11.550370  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:12.050064  830558 type.go:168] "Request Body" body=""
	I1210 06:32:12.050145  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:12.050512  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:12.550074  830558 type.go:168] "Request Body" body=""
	I1210 06:32:12.550151  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:12.550550  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:13.050834  830558 type.go:168] "Request Body" body=""
	I1210 06:32:13.050904  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:13.051215  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:13.051271  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:13.549985  830558 type.go:168] "Request Body" body=""
	I1210 06:32:13.550076  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:13.550390  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:14.050138  830558 type.go:168] "Request Body" body=""
	I1210 06:32:14.050216  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:14.050575  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:14.550278  830558 type.go:168] "Request Body" body=""
	I1210 06:32:14.550375  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:14.550721  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:15.050080  830558 type.go:168] "Request Body" body=""
	I1210 06:32:15.050169  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:15.050590  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:15.550609  830558 type.go:168] "Request Body" body=""
	I1210 06:32:15.550687  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:15.551021  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:15.551080  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:16.050637  830558 type.go:168] "Request Body" body=""
	I1210 06:32:16.050708  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:16.050991  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:16.482787  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:32:16.542663  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:16.546278  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:16.546307  830558 retry.go:31] will retry after 7.242230365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:16.550430  830558 type.go:168] "Request Body" body=""
	I1210 06:32:16.550511  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:16.550807  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:17.050649  830558 type.go:168] "Request Body" body=""
	I1210 06:32:17.050741  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:17.051138  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:17.550461  830558 type.go:168] "Request Body" body=""
	I1210 06:32:17.550553  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:17.550825  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:18.050619  830558 type.go:168] "Request Body" body=""
	I1210 06:32:18.050699  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:18.051034  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:18.051091  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:18.550728  830558 type.go:168] "Request Body" body=""
	I1210 06:32:18.550817  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:18.551143  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:19.050890  830558 type.go:168] "Request Body" body=""
	I1210 06:32:19.050958  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:19.051259  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:19.550945  830558 type.go:168] "Request Body" body=""
	I1210 06:32:19.551021  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:19.551375  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:20.049992  830558 type.go:168] "Request Body" body=""
	I1210 06:32:20.050068  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:20.050449  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:20.549971  830558 type.go:168] "Request Body" body=""
	I1210 06:32:20.550047  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:20.550340  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:20.550389  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:21.049976  830558 type.go:168] "Request Body" body=""
	I1210 06:32:21.050049  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:21.050379  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:21.550111  830558 type.go:168] "Request Body" body=""
	I1210 06:32:21.550187  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:21.550564  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:22.050899  830558 type.go:168] "Request Body" body=""
	I1210 06:32:22.050974  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:22.051306  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:22.550116  830558 type.go:168] "Request Body" body=""
	I1210 06:32:22.550195  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:22.550553  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:22.550614  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:23.050042  830558 type.go:168] "Request Body" body=""
	I1210 06:32:23.050118  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:23.050459  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:23.549939  830558 type.go:168] "Request Body" body=""
	I1210 06:32:23.550009  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:23.550297  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:23.788809  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:32:23.847955  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:23.851833  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:23.851867  830558 retry.go:31] will retry after 12.516286884s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:24.050060  830558 type.go:168] "Request Body" body=""
	I1210 06:32:24.050142  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:24.050525  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:24.550248  830558 type.go:168] "Request Body" body=""
	I1210 06:32:24.550322  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:24.550678  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:24.550736  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:25.050033  830558 type.go:168] "Request Body" body=""
	I1210 06:32:25.050114  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:25.050546  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:25.550682  830558 type.go:168] "Request Body" body=""
	I1210 06:32:25.550758  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:25.551068  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:26.050934  830558 type.go:168] "Request Body" body=""
	I1210 06:32:26.051011  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:26.051351  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:26.549946  830558 type.go:168] "Request Body" body=""
	I1210 06:32:26.550023  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:26.550287  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:27.050019  830558 type.go:168] "Request Body" body=""
	I1210 06:32:27.050090  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:27.050429  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:27.050507  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:27.126908  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:32:27.191358  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:27.191398  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:27.191417  830558 retry.go:31] will retry after 11.065094951s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:27.549993  830558 type.go:168] "Request Body" body=""
	I1210 06:32:27.550078  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:27.550424  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:28.050147  830558 type.go:168] "Request Body" body=""
	I1210 06:32:28.050242  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:28.050581  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:28.550132  830558 type.go:168] "Request Body" body=""
	I1210 06:32:28.550207  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:28.550541  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:29.050024  830558 type.go:168] "Request Body" body=""
	I1210 06:32:29.050122  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:29.050535  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:29.050590  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:29.550851  830558 type.go:168] "Request Body" body=""
	I1210 06:32:29.550933  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:29.551212  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:30.050015  830558 type.go:168] "Request Body" body=""
	I1210 06:32:30.050111  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:30.050559  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:30.550493  830558 type.go:168] "Request Body" body=""
	I1210 06:32:30.550571  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:30.550933  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:31.050570  830558 type.go:168] "Request Body" body=""
	I1210 06:32:31.050667  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:31.050939  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:31.050993  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:31.550742  830558 type.go:168] "Request Body" body=""
	I1210 06:32:31.550827  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:31.551169  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:32.050826  830558 type.go:168] "Request Body" body=""
	I1210 06:32:32.050910  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:32.051237  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:32.549938  830558 type.go:168] "Request Body" body=""
	I1210 06:32:32.550010  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:32.550264  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:33.050015  830558 type.go:168] "Request Body" body=""
	I1210 06:32:33.050091  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:33.050451  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:33.550173  830558 type.go:168] "Request Body" body=""
	I1210 06:32:33.550258  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:33.550581  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:33.550638  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:34.049992  830558 type.go:168] "Request Body" body=""
	I1210 06:32:34.050060  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:34.050330  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:34.550070  830558 type.go:168] "Request Body" body=""
	I1210 06:32:34.550159  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:34.550540  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:35.050253  830558 type.go:168] "Request Body" body=""
	I1210 06:32:35.050340  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:35.050688  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:35.550817  830558 type.go:168] "Request Body" body=""
	I1210 06:32:35.550922  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:35.551259  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:35.551320  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:36.049997  830558 type.go:168] "Request Body" body=""
	I1210 06:32:36.050082  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:36.050415  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:36.369119  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:32:36.431728  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:36.431764  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:36.431783  830558 retry.go:31] will retry after 39.090862924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:36.549963  830558 type.go:168] "Request Body" body=""
	I1210 06:32:36.550043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:36.550375  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:37.050652  830558 type.go:168] "Request Body" body=""
	I1210 06:32:37.050724  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:37.050986  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:37.550839  830558 type.go:168] "Request Body" body=""
	I1210 06:32:37.550916  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:37.551209  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:38.049961  830558 type.go:168] "Request Body" body=""
	I1210 06:32:38.050040  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:38.050387  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:38.050446  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:38.256706  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:32:38.315606  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:38.315652  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:38.315671  830558 retry.go:31] will retry after 24.874249468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:38.549965  830558 type.go:168] "Request Body" body=""
	I1210 06:32:38.550037  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:38.550353  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:39.050035  830558 type.go:168] "Request Body" body=""
	I1210 06:32:39.050112  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:39.050437  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:39.550165  830558 type.go:168] "Request Body" body=""
	I1210 06:32:39.550240  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:39.550611  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:40.050932  830558 type.go:168] "Request Body" body=""
	I1210 06:32:40.051023  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:40.051412  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:40.051484  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:40.550007  830558 type.go:168] "Request Body" body=""
	I1210 06:32:40.550092  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:40.550424  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:41.050151  830558 type.go:168] "Request Body" body=""
	I1210 06:32:41.050226  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:41.050542  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:41.549934  830558 type.go:168] "Request Body" body=""
	I1210 06:32:41.550007  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:41.550347  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:42.050083  830558 type.go:168] "Request Body" body=""
	I1210 06:32:42.050160  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:42.050554  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:42.550115  830558 type.go:168] "Request Body" body=""
	I1210 06:32:42.550200  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:42.550557  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:42.550613  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:43.050266  830558 type.go:168] "Request Body" body=""
	I1210 06:32:43.050343  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:43.050649  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:43.549979  830558 type.go:168] "Request Body" body=""
	I1210 06:32:43.550060  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:43.550388  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:44.049985  830558 type.go:168] "Request Body" body=""
	I1210 06:32:44.050061  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:44.050403  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:44.549913  830558 type.go:168] "Request Body" body=""
	I1210 06:32:44.549995  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:44.550254  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:45.050148  830558 type.go:168] "Request Body" body=""
	I1210 06:32:45.050255  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:45.050774  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:45.050854  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:45.549930  830558 type.go:168] "Request Body" body=""
	I1210 06:32:45.550027  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:45.550349  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:46.050055  830558 type.go:168] "Request Body" body=""
	I1210 06:32:46.050137  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:46.050451  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:46.550031  830558 type.go:168] "Request Body" body=""
	I1210 06:32:46.550106  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:46.550449  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:47.050187  830558 type.go:168] "Request Body" body=""
	I1210 06:32:47.050264  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:47.050652  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:47.550359  830558 type.go:168] "Request Body" body=""
	I1210 06:32:47.550435  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:47.550733  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:47.550791  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:48.050535  830558 type.go:168] "Request Body" body=""
	I1210 06:32:48.050612  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:48.050950  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:48.550625  830558 type.go:168] "Request Body" body=""
	I1210 06:32:48.550703  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:48.551027  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:49.050305  830558 type.go:168] "Request Body" body=""
	I1210 06:32:49.050380  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:49.050665  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:49.550026  830558 type.go:168] "Request Body" body=""
	I1210 06:32:49.550106  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:49.550494  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:50.050211  830558 type.go:168] "Request Body" body=""
	I1210 06:32:50.050293  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:50.050654  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:50.050715  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:50.550658  830558 type.go:168] "Request Body" body=""
	I1210 06:32:50.550732  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:50.550987  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:51.050776  830558 type.go:168] "Request Body" body=""
	I1210 06:32:51.050858  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:51.051172  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:51.549919  830558 type.go:168] "Request Body" body=""
	I1210 06:32:51.549999  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:51.550341  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:52.049978  830558 type.go:168] "Request Body" body=""
	I1210 06:32:52.050053  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:52.050371  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:52.550001  830558 type.go:168] "Request Body" body=""
	I1210 06:32:52.550075  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:52.550411  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:52.550496  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:53.050025  830558 type.go:168] "Request Body" body=""
	I1210 06:32:53.050100  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:53.050443  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:53.550167  830558 type.go:168] "Request Body" body=""
	I1210 06:32:53.550287  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:53.550564  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:54.050027  830558 type.go:168] "Request Body" body=""
	I1210 06:32:54.050122  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:54.050442  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:54.550226  830558 type.go:168] "Request Body" body=""
	I1210 06:32:54.550303  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:54.550659  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:54.550719  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:55.049976  830558 type.go:168] "Request Body" body=""
	I1210 06:32:55.050049  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:55.050343  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:55.550553  830558 type.go:168] "Request Body" body=""
	I1210 06:32:55.550627  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:55.550930  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:56.050724  830558 type.go:168] "Request Body" body=""
	I1210 06:32:56.050807  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:56.051136  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:56.550392  830558 type.go:168] "Request Body" body=""
	I1210 06:32:56.550490  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:56.550765  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:56.550815  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:57.050617  830558 type.go:168] "Request Body" body=""
	I1210 06:32:57.050698  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:57.051032  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:57.550880  830558 type.go:168] "Request Body" body=""
	I1210 06:32:57.550957  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:57.551319  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:58.050503  830558 type.go:168] "Request Body" body=""
	I1210 06:32:58.050584  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:58.050859  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:58.550636  830558 type.go:168] "Request Body" body=""
	I1210 06:32:58.550712  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:58.551061  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:58.551119  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:59.050715  830558 type.go:168] "Request Body" body=""
	I1210 06:32:59.050796  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:59.051120  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:59.550919  830558 type.go:168] "Request Body" body=""
	I1210 06:32:59.550998  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:59.551267  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:00.050026  830558 type.go:168] "Request Body" body=""
	I1210 06:33:00.050119  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:00.052318  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:33:00.550554  830558 type.go:168] "Request Body" body=""
	I1210 06:33:00.550633  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:00.550978  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:01.050281  830558 type.go:168] "Request Body" body=""
	I1210 06:33:01.050351  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:01.050633  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:01.050680  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:01.550031  830558 type.go:168] "Request Body" body=""
	I1210 06:33:01.550108  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:01.550446  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:02.050197  830558 type.go:168] "Request Body" body=""
	I1210 06:33:02.050277  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:02.050651  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:02.550347  830558 type.go:168] "Request Body" body=""
	I1210 06:33:02.550420  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:02.550704  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:03.050003  830558 type.go:168] "Request Body" body=""
	I1210 06:33:03.050076  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:03.050408  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:03.190859  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:33:03.248648  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:33:03.248694  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:33:03.248794  830558 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 06:33:03.550044  830558 type.go:168] "Request Body" body=""
	I1210 06:33:03.550122  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:03.550404  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:03.550454  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:04.050739  830558 type.go:168] "Request Body" body=""
	I1210 06:33:04.050814  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:04.051133  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:04.550977  830558 type.go:168] "Request Body" body=""
	I1210 06:33:04.551052  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:04.551392  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:05.050105  830558 type.go:168] "Request Body" body=""
	I1210 06:33:05.050184  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:05.050531  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:05.550435  830558 type.go:168] "Request Body" body=""
	I1210 06:33:05.550528  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:05.550787  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:05.550829  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:06.050557  830558 type.go:168] "Request Body" body=""
	I1210 06:33:06.050630  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:06.050961  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:06.550801  830558 type.go:168] "Request Body" body=""
	I1210 06:33:06.550879  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:06.551223  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:07.049908  830558 type.go:168] "Request Body" body=""
	I1210 06:33:07.049978  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:07.050285  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:07.550020  830558 type.go:168] "Request Body" body=""
	I1210 06:33:07.550098  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:07.550444  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:08.050180  830558 type.go:168] "Request Body" body=""
	I1210 06:33:08.050261  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:08.050656  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:08.050717  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:08.549966  830558 type.go:168] "Request Body" body=""
	I1210 06:33:08.550043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:08.550358  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:09.050033  830558 type.go:168] "Request Body" body=""
	I1210 06:33:09.050114  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:09.050432  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:09.550043  830558 type.go:168] "Request Body" body=""
	I1210 06:33:09.550121  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:09.550501  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:10.050055  830558 type.go:168] "Request Body" body=""
	I1210 06:33:10.050127  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:10.050401  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:10.550597  830558 type.go:168] "Request Body" body=""
	I1210 06:33:10.550682  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:10.551012  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:10.551066  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:11.050806  830558 type.go:168] "Request Body" body=""
	I1210 06:33:11.050883  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:11.051219  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:11.550460  830558 type.go:168] "Request Body" body=""
	I1210 06:33:11.550568  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:11.550827  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:12.050637  830558 type.go:168] "Request Body" body=""
	I1210 06:33:12.050716  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:12.051056  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:12.550879  830558 type.go:168] "Request Body" body=""
	I1210 06:33:12.550959  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:12.551385  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:12.551442  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:13.049924  830558 type.go:168] "Request Body" body=""
	I1210 06:33:13.049995  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:13.050301  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:13.549989  830558 type.go:168] "Request Body" body=""
	I1210 06:33:13.550071  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:13.550389  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:14.050007  830558 type.go:168] "Request Body" body=""
	I1210 06:33:14.050083  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:14.050417  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:14.550127  830558 type.go:168] "Request Body" body=""
	I1210 06:33:14.550200  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:14.550484  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:15.050155  830558 type.go:168] "Request Body" body=""
	I1210 06:33:15.050238  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:15.050632  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:15.050702  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:15.522803  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:33:15.550270  830558 type.go:168] "Request Body" body=""
	I1210 06:33:15.550344  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:15.550631  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:15.583628  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:33:15.587769  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:33:15.587875  830558 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 06:33:15.590972  830558 out.go:179] * Enabled addons: 
	I1210 06:33:15.594685  830558 addons.go:530] duration metric: took 1m30.455573868s for enable addons: enabled=[]
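	[editor's note] At 06:33:15 the addon-enable path gives up cleanly: kubectl's client-side validation needs the OpenAPI schema from the apiserver (hence the failed Get on localhost:8441/openapi/v2), so apply exits with status 1, addons.go logs the "apply failed, will retry" warning, and the run ends with enabled=[] after 1m30s. The Go sketch below shows that shell-out-and-retry pattern in hedged form; applyAddon, the attempt count, and the backoff are hypothetical and do not reproduce minikube's actual addons.go.

```go
// Sketch of applying an addon manifest via the bundled kubectl with
// retries, assuming the minikube paths seen in the log. sudo accepts
// VAR=value arguments before the command, so KUBECONFIG is passed that way.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyAddon is an illustrative helper, not minikube's implementation.
func applyAddon(kubectl, manifest string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			kubectl, "apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
		fmt.Println(lastErr, "- will retry")
		time.Sleep(backoff)
	}
	return lastErr
}

func main() {
	err := applyAddon(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		3, 5*time.Second)
	if err != nil {
		fmt.Println("! Enabling 'storage-provisioner' returned an error:", err)
	}
}
```

	As the kubectl error itself suggests, --validate=false would skip the OpenAPI download, but it would not help here: the apply itself would hit the same refused connection on port 8441.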
	I1210 06:33:16.049998  830558 type.go:168] "Request Body" body=""
	I1210 06:33:16.050079  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:16.050410  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:16.550044  830558 type.go:168] "Request Body" body=""
	I1210 06:33:16.550122  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:16.550449  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:17.050003  830558 type.go:168] "Request Body" body=""
	I1210 06:33:17.050101  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:17.050382  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:17.549964  830558 type.go:168] "Request Body" body=""
	I1210 06:33:17.550065  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:17.550363  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:17.550413  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:18.050065  830558 type.go:168] "Request Body" body=""
	I1210 06:33:18.050137  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:18.050504  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:18.550195  830558 type.go:168] "Request Body" body=""
	I1210 06:33:18.550271  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:18.550617  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:19.050795  830558 type.go:168] "Request Body" body=""
	I1210 06:33:19.050864  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:19.051173  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:19.550924  830558 type.go:168] "Request Body" body=""
	I1210 06:33:19.551041  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:19.551366  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:19.551422  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:20.049936  830558 type.go:168] "Request Body" body=""
	I1210 06:33:20.050041  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:20.050392  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:20.549967  830558 type.go:168] "Request Body" body=""
	I1210 06:33:20.550046  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:20.550354  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:21.050040  830558 type.go:168] "Request Body" body=""
	I1210 06:33:21.050115  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:21.050434  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:21.550032  830558 type.go:168] "Request Body" body=""
	I1210 06:33:21.550110  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:21.550431  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:22.049927  830558 type.go:168] "Request Body" body=""
	I1210 06:33:22.049998  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:22.050320  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:22.050369  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:22.550025  830558 type.go:168] "Request Body" body=""
	I1210 06:33:22.550107  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:22.550455  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:23.050195  830558 type.go:168] "Request Body" body=""
	I1210 06:33:23.050272  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:23.050681  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:23.549948  830558 type.go:168] "Request Body" body=""
	I1210 06:33:23.550016  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:23.550276  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:24.049985  830558 type.go:168] "Request Body" body=""
	I1210 06:33:24.050060  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:24.050402  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:24.050460  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:24.550132  830558 type.go:168] "Request Body" body=""
	I1210 06:33:24.550213  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:24.550552  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:25.049958  830558 type.go:168] "Request Body" body=""
	I1210 06:33:25.050033  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:25.050287  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:25.550502  830558 type.go:168] "Request Body" body=""
	I1210 06:33:25.550576  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:25.550881  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:26.050647  830558 type.go:168] "Request Body" body=""
	I1210 06:33:26.050720  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:26.051065  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:26.051131  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:26.550815  830558 type.go:168] "Request Body" body=""
	I1210 06:33:26.550883  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:26.551145  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:27.049919  830558 type.go:168] "Request Body" body=""
	I1210 06:33:27.050002  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:27.050335  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:27.550026  830558 type.go:168] "Request Body" body=""
	I1210 06:33:27.550106  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:27.550459  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:28.050765  830558 type.go:168] "Request Body" body=""
	I1210 06:33:28.050846  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:28.051128  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:28.051173  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:28.550887  830558 type.go:168] "Request Body" body=""
	I1210 06:33:28.550964  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:28.551314  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:29.050005  830558 type.go:168] "Request Body" body=""
	I1210 06:33:29.050094  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:29.050428  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:29.549962  830558 type.go:168] "Request Body" body=""
	I1210 06:33:29.550045  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:29.550327  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:30.050074  830558 type.go:168] "Request Body" body=""
	I1210 06:33:30.050166  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:30.050611  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:30.550611  830558 type.go:168] "Request Body" body=""
	I1210 06:33:30.550706  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:30.551062  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:30.551116  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:31.050373  830558 type.go:168] "Request Body" body=""
	I1210 06:33:31.050446  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:31.050762  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:31.550563  830558 type.go:168] "Request Body" body=""
	I1210 06:33:31.550642  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:31.550963  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:32.050761  830558 type.go:168] "Request Body" body=""
	I1210 06:33:32.050841  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:32.051145  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:32.550438  830558 type.go:168] "Request Body" body=""
	I1210 06:33:32.550527  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:32.550836  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:33.050606  830558 type.go:168] "Request Body" body=""
	I1210 06:33:33.050687  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:33.051001  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:33.051058  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:33.550797  830558 type.go:168] "Request Body" body=""
	I1210 06:33:33.550872  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:33.551204  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:34.050446  830558 type.go:168] "Request Body" body=""
	I1210 06:33:34.050542  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:34.050806  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:34.550570  830558 type.go:168] "Request Body" body=""
	I1210 06:33:34.550651  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:34.551007  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:35.050684  830558 type.go:168] "Request Body" body=""
	I1210 06:33:35.050765  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:35.051121  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:35.051180  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:35.549974  830558 type.go:168] "Request Body" body=""
	I1210 06:33:35.550049  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:35.550379  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:36.050068  830558 type.go:168] "Request Body" body=""
	I1210 06:33:36.050156  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:36.050551  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:36.550267  830558 type.go:168] "Request Body" body=""
	I1210 06:33:36.550341  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:36.550704  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:37.050415  830558 type.go:168] "Request Body" body=""
	I1210 06:33:37.050506  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:37.050765  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:37.550089  830558 type.go:168] "Request Body" body=""
	I1210 06:33:37.550162  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:37.550494  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:37.550551  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:38.050049  830558 type.go:168] "Request Body" body=""
	I1210 06:33:38.050196  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:38.050593  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:38.550283  830558 type.go:168] "Request Body" body=""
	I1210 06:33:38.550352  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:38.550637  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:39.049986  830558 type.go:168] "Request Body" body=""
	I1210 06:33:39.050067  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:39.050379  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:39.550093  830558 type.go:168] "Request Body" body=""
	I1210 06:33:39.550174  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:39.550524  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:39.550606  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:40.049973  830558 type.go:168] "Request Body" body=""
	I1210 06:33:40.050048  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:40.055554  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1210 06:33:40.550566  830558 type.go:168] "Request Body" body=""
	I1210 06:33:40.550648  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:40.551812  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1210 06:33:41.050589  830558 type.go:168] "Request Body" body=""
	I1210 06:33:41.050670  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:41.051002  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:41.550775  830558 type.go:168] "Request Body" body=""
	I1210 06:33:41.550850  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:41.551122  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:41.551174  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:42.050929  830558 type.go:168] "Request Body" body=""
	I1210 06:33:42.051003  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:42.051301  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:42.550943  830558 type.go:168] "Request Body" body=""
	I1210 06:33:42.551032  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:42.551344  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:43.049952  830558 type.go:168] "Request Body" body=""
	I1210 06:33:43.050027  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:43.050287  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:43.550011  830558 type.go:168] "Request Body" body=""
	I1210 06:33:43.550090  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:43.550411  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:44.050208  830558 type.go:168] "Request Body" body=""
	I1210 06:33:44.050291  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:44.050657  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:44.050712  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:44.549928  830558 type.go:168] "Request Body" body=""
	I1210 06:33:44.550010  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:44.550272  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:45.050007  830558 type.go:168] "Request Body" body=""
	I1210 06:33:45.050090  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:45.050538  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:45.550260  830558 type.go:168] "Request Body" body=""
	I1210 06:33:45.550359  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:45.550744  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:46.051019  830558 type.go:168] "Request Body" body=""
	I1210 06:33:46.051104  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:46.051470  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:46.051522  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:46.550019  830558 type.go:168] "Request Body" body=""
	I1210 06:33:46.550105  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:46.550441  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:47.050177  830558 type.go:168] "Request Body" body=""
	I1210 06:33:47.050256  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:47.050580  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:47.550565  830558 type.go:168] "Request Body" body=""
	I1210 06:33:47.550631  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:47.550895  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:48.050718  830558 type.go:168] "Request Body" body=""
	I1210 06:33:48.050799  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:48.051139  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:48.550959  830558 type.go:168] "Request Body" body=""
	I1210 06:33:48.551034  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:48.551396  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:48.551454  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:49.049969  830558 type.go:168] "Request Body" body=""
	I1210 06:33:49.050042  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:49.050364  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:49.550025  830558 type.go:168] "Request Body" body=""
	I1210 06:33:49.550097  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:49.550429  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:50.050016  830558 type.go:168] "Request Body" body=""
	I1210 06:33:50.050099  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:50.050484  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:50.549977  830558 type.go:168] "Request Body" body=""
	I1210 06:33:50.550046  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:50.550304  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:51.049996  830558 type.go:168] "Request Body" body=""
	I1210 06:33:51.050078  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:51.050402  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:51.050452  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:51.550024  830558 type.go:168] "Request Body" body=""
	I1210 06:33:51.550099  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:51.550445  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:52.049971  830558 type.go:168] "Request Body" body=""
	I1210 06:33:52.050042  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:52.050360  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:52.549965  830558 type.go:168] "Request Body" body=""
	I1210 06:33:52.550043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:52.550379  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:53.050013  830558 type.go:168] "Request Body" body=""
	I1210 06:33:53.050108  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:53.050485  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:53.050541  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:53.549985  830558 type.go:168] "Request Body" body=""
	I1210 06:33:53.550061  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:53.550363  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:54.050022  830558 type.go:168] "Request Body" body=""
	I1210 06:33:54.050106  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:54.050509  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:54.550225  830558 type.go:168] "Request Body" body=""
	I1210 06:33:54.550302  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:54.550641  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:55.049973  830558 type.go:168] "Request Body" body=""
	I1210 06:33:55.050050  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:55.050327  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:55.550478  830558 type.go:168] "Request Body" body=""
	I1210 06:33:55.550556  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:55.550933  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:55.550991  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:56.050594  830558 type.go:168] "Request Body" body=""
	I1210 06:33:56.050672  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:56.051056  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:56.550810  830558 type.go:168] "Request Body" body=""
	I1210 06:33:56.550888  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:56.551156  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:57.050906  830558 type.go:168] "Request Body" body=""
	I1210 06:33:57.050979  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:57.051317  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:57.550013  830558 type.go:168] "Request Body" body=""
	I1210 06:33:57.550089  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:57.550388  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:58.049906  830558 type.go:168] "Request Body" body=""
	I1210 06:33:58.049976  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:58.050249  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:58.050294  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:58.549945  830558 type.go:168] "Request Body" body=""
	I1210 06:33:58.550024  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:58.550385  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:59.050095  830558 type.go:168] "Request Body" body=""
	I1210 06:33:59.050176  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:59.050522  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:59.550222  830558 type.go:168] "Request Body" body=""
	I1210 06:33:59.550309  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:59.550612  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:00.050052  830558 type.go:168] "Request Body" body=""
	I1210 06:34:00.050141  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:00.050455  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:00.050684  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:00.549926  830558 type.go:168] "Request Body" body=""
	I1210 06:34:00.550006  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:00.550355  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:01.050662  830558 type.go:168] "Request Body" body=""
	I1210 06:34:01.050737  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:01.051064  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:01.550884  830558 type.go:168] "Request Body" body=""
	I1210 06:34:01.550964  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:01.551306  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:02.050041  830558 type.go:168] "Request Body" body=""
	I1210 06:34:02.050127  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:02.050503  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:02.550195  830558 type.go:168] "Request Body" body=""
	I1210 06:34:02.550268  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:02.550561  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:02.550618  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	[log condensed: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-534748 poll repeats every ~500ms from 06:34:03 through 06:35:03; each cycle logs an empty request body, the same Accept (application/vnd.kubernetes.protobuf,application/json) and User-Agent (minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format) headers, and an empty response completing in 0-1 ms. node_ready.go:55 emits the same W-level warning roughly every two seconds over that span: error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused]
	I1210 06:35:04.049974  830558 type.go:168] "Request Body" body=""
	I1210 06:35:04.050053  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:04.050326  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:04.550002  830558 type.go:168] "Request Body" body=""
	I1210 06:35:04.550073  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:04.550366  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:05.050030  830558 type.go:168] "Request Body" body=""
	I1210 06:35:05.050111  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:05.050435  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:05.550392  830558 type.go:168] "Request Body" body=""
	I1210 06:35:05.550487  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:05.550754  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:05.550797  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:06.050578  830558 type.go:168] "Request Body" body=""
	I1210 06:35:06.050658  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:06.051028  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:06.550698  830558 type.go:168] "Request Body" body=""
	I1210 06:35:06.550789  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:06.551170  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:07.050527  830558 type.go:168] "Request Body" body=""
	I1210 06:35:07.050605  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:07.050889  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:07.550670  830558 type.go:168] "Request Body" body=""
	I1210 06:35:07.550754  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:07.551130  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:07.551186  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:08.049928  830558 type.go:168] "Request Body" body=""
	I1210 06:35:08.050023  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:08.050388  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:08.550709  830558 type.go:168] "Request Body" body=""
	I1210 06:35:08.550783  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:08.551109  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:09.050933  830558 type.go:168] "Request Body" body=""
	I1210 06:35:09.051017  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:09.051361  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:09.550061  830558 type.go:168] "Request Body" body=""
	I1210 06:35:09.550147  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:09.550539  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:10.049990  830558 type.go:168] "Request Body" body=""
	I1210 06:35:10.050069  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:10.050353  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:10.050409  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:10.550333  830558 type.go:168] "Request Body" body=""
	I1210 06:35:10.550412  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:10.550769  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:11.050573  830558 type.go:168] "Request Body" body=""
	I1210 06:35:11.050649  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:11.050998  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:11.550270  830558 type.go:168] "Request Body" body=""
	I1210 06:35:11.550348  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:11.550636  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:12.050031  830558 type.go:168] "Request Body" body=""
	I1210 06:35:12.050112  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:12.050460  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:12.050544  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:12.550016  830558 type.go:168] "Request Body" body=""
	I1210 06:35:12.550093  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:12.550407  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:13.049930  830558 type.go:168] "Request Body" body=""
	I1210 06:35:13.050003  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:13.050262  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:13.549947  830558 type.go:168] "Request Body" body=""
	I1210 06:35:13.550020  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:13.550364  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:14.049948  830558 type.go:168] "Request Body" body=""
	I1210 06:35:14.050033  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:14.050379  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:14.549995  830558 type.go:168] "Request Body" body=""
	I1210 06:35:14.550069  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:14.550374  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:14.550430  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:15.049977  830558 type.go:168] "Request Body" body=""
	I1210 06:35:15.050080  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:15.050511  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:15.550549  830558 type.go:168] "Request Body" body=""
	I1210 06:35:15.550643  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:15.550979  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:16.050252  830558 type.go:168] "Request Body" body=""
	I1210 06:35:16.050330  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:16.050628  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:16.550008  830558 type.go:168] "Request Body" body=""
	I1210 06:35:16.550088  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:16.550421  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:16.550501  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:17.050213  830558 type.go:168] "Request Body" body=""
	I1210 06:35:17.050312  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:17.050693  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:17.549908  830558 type.go:168] "Request Body" body=""
	I1210 06:35:17.549986  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:17.550246  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:18.049930  830558 type.go:168] "Request Body" body=""
	I1210 06:35:18.050001  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:18.050297  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:18.549986  830558 type.go:168] "Request Body" body=""
	I1210 06:35:18.550063  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:18.550458  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:18.550526  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:19.050194  830558 type.go:168] "Request Body" body=""
	I1210 06:35:19.050272  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:19.050560  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:19.550268  830558 type.go:168] "Request Body" body=""
	I1210 06:35:19.550350  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:19.550659  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:20.050392  830558 type.go:168] "Request Body" body=""
	I1210 06:35:20.050488  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:20.050847  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:20.550656  830558 type.go:168] "Request Body" body=""
	I1210 06:35:20.550726  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:20.551004  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:20.551047  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
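
A detail worth noting in these warnings: "connect: connection refused" is a TCP-level failure, meaning the host 192.168.49.2 is reachable but nothing is listening on port 8441 (the apiserver container is down or restarting). A timeout instead would point at routing or firewall problems. A tiny Go probe, illustrative only, can distinguish the two cases:

	// dialprobe.go — quick TCP probe of the apiserver endpoint; illustrative only.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			// "connect: connection refused" => host reachable, nothing listening;
			// "i/o timeout"                 => packets unanswered (routing/firewall).
			fmt.Println("probe failed:", err)
			return
		}
		conn.Close()
		fmt.Println("port is accepting connections")
	}
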
	I1210 06:35:21.050812  830558 type.go:168] "Request Body" body=""
	I1210 06:35:21.050894  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:21.051248  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:21.550928  830558 type.go:168] "Request Body" body=""
	I1210 06:35:21.551007  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:21.551349  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:22.049936  830558 type.go:168] "Request Body" body=""
	I1210 06:35:22.050007  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:22.050275  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:22.549967  830558 type.go:168] "Request Body" body=""
	I1210 06:35:22.550042  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:22.550409  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:23.050135  830558 type.go:168] "Request Body" body=""
	I1210 06:35:23.050215  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:23.050584  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:23.050648  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:23.550304  830558 type.go:168] "Request Body" body=""
	I1210 06:35:23.550376  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:23.550716  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:24.050410  830558 type.go:168] "Request Body" body=""
	I1210 06:35:24.050504  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:24.050842  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:24.550612  830558 type.go:168] "Request Body" body=""
	I1210 06:35:24.550694  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:24.550967  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:25.050637  830558 type.go:168] "Request Body" body=""
	I1210 06:35:25.050728  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:25.051015  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:25.051074  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:25.550298  830558 type.go:168] "Request Body" body=""
	I1210 06:35:25.550378  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:25.550744  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:26.050574  830558 type.go:168] "Request Body" body=""
	I1210 06:35:26.050656  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:26.051021  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:26.550326  830558 type.go:168] "Request Body" body=""
	I1210 06:35:26.550392  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:26.550669  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:27.050024  830558 type.go:168] "Request Body" body=""
	I1210 06:35:27.050102  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:27.050427  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:27.550033  830558 type.go:168] "Request Body" body=""
	I1210 06:35:27.550110  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:27.550485  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:27.550550  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:28.050833  830558 type.go:168] "Request Body" body=""
	I1210 06:35:28.050903  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:28.051180  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:28.550989  830558 type.go:168] "Request Body" body=""
	I1210 06:35:28.551079  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:28.551403  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:29.050086  830558 type.go:168] "Request Body" body=""
	I1210 06:35:29.050165  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:29.050503  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:29.550827  830558 type.go:168] "Request Body" body=""
	I1210 06:35:29.550916  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:29.551182  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:29.551227  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:30.049951  830558 type.go:168] "Request Body" body=""
	I1210 06:35:30.050049  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:30.050563  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:30.550365  830558 type.go:168] "Request Body" body=""
	I1210 06:35:30.550440  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:30.550785  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:31.050058  830558 type.go:168] "Request Body" body=""
	I1210 06:35:31.050147  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:31.050461  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:31.550025  830558 type.go:168] "Request Body" body=""
	I1210 06:35:31.550100  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:31.550430  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:32.050026  830558 type.go:168] "Request Body" body=""
	I1210 06:35:32.050102  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:32.050406  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:32.050456  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:32.549918  830558 type.go:168] "Request Body" body=""
	I1210 06:35:32.549989  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:32.550312  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:33.050009  830558 type.go:168] "Request Body" body=""
	I1210 06:35:33.050085  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:33.050432  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:33.550155  830558 type.go:168] "Request Body" body=""
	I1210 06:35:33.550240  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:33.550628  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:34.050314  830558 type.go:168] "Request Body" body=""
	I1210 06:35:34.050390  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:34.050677  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:34.050723  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:34.550592  830558 type.go:168] "Request Body" body=""
	I1210 06:35:34.550685  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:34.551061  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:35.050885  830558 type.go:168] "Request Body" body=""
	I1210 06:35:35.050965  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:35.051309  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:35.550125  830558 type.go:168] "Request Body" body=""
	I1210 06:35:35.550193  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:35.550452  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:36.050024  830558 type.go:168] "Request Body" body=""
	I1210 06:35:36.050108  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:36.050450  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:36.550133  830558 type.go:168] "Request Body" body=""
	I1210 06:35:36.550209  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:36.550544  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:36.550600  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:37.050278  830558 type.go:168] "Request Body" body=""
	I1210 06:35:37.050376  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:37.050709  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:37.550074  830558 type.go:168] "Request Body" body=""
	I1210 06:35:37.550172  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:37.550549  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:38.050159  830558 type.go:168] "Request Body" body=""
	I1210 06:35:38.050237  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:38.050665  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:38.550505  830558 type.go:168] "Request Body" body=""
	I1210 06:35:38.550588  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:38.550849  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:38.550901  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:39.050640  830558 type.go:168] "Request Body" body=""
	I1210 06:35:39.050721  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:39.051071  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:39.550851  830558 type.go:168] "Request Body" body=""
	I1210 06:35:39.550926  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:39.551256  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:40.050535  830558 type.go:168] "Request Body" body=""
	I1210 06:35:40.050625  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:40.050933  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:40.550928  830558 type.go:168] "Request Body" body=""
	I1210 06:35:40.551010  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:40.551608  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:40.551663  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:41.049981  830558 type.go:168] "Request Body" body=""
	I1210 06:35:41.050064  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:41.050352  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:41.550016  830558 type.go:168] "Request Body" body=""
	I1210 06:35:41.550093  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:41.550361  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:42.050005  830558 type.go:168] "Request Body" body=""
	I1210 06:35:42.050080  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:42.050409  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:42.549993  830558 type.go:168] "Request Body" body=""
	I1210 06:35:42.550068  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:42.550359  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:43.050031  830558 type.go:168] "Request Body" body=""
	I1210 06:35:43.050099  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:43.050413  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:43.050491  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:43.550149  830558 type.go:168] "Request Body" body=""
	I1210 06:35:43.550232  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:43.550536  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:44.050209  830558 type.go:168] "Request Body" body=""
	I1210 06:35:44.050286  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:44.050649  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:44.550377  830558 type.go:168] "Request Body" body=""
	I1210 06:35:44.550446  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:44.550724  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:45.050153  830558 type.go:168] "Request Body" body=""
	I1210 06:35:45.050238  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:45.050595  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:45.050650  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:45.549952  830558 type.go:168] "Request Body" body=""
	I1210 06:35:45.550034  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:45.550414  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:46.050278  830558 type.go:168] "Request Body" body=""
	I1210 06:35:46.050372  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:46.055238  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 06:35:46.550179  830558 type.go:168] "Request Body" body=""
	I1210 06:35:46.550287  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:46.550675  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:47.050432  830558 type.go:168] "Request Body" body=""
	I1210 06:35:47.050548  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:47.050914  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:47.050975  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:47.550717  830558 type.go:168] "Request Body" body=""
	I1210 06:35:47.550835  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:47.551174  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:48.049903  830558 type.go:168] "Request Body" body=""
	I1210 06:35:48.049980  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:48.050317  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:48.550065  830558 type.go:168] "Request Body" body=""
	I1210 06:35:48.550151  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:48.550558  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:49.050850  830558 type.go:168] "Request Body" body=""
	I1210 06:35:49.050920  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:49.051255  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:49.051361  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:49.550035  830558 type.go:168] "Request Body" body=""
	I1210 06:35:49.550107  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:49.550438  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:50.050183  830558 type.go:168] "Request Body" body=""
	I1210 06:35:50.050272  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:50.050684  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:50.550583  830558 type.go:168] "Request Body" body=""
	I1210 06:35:50.550655  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:50.550936  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:51.050719  830558 type.go:168] "Request Body" body=""
	I1210 06:35:51.050800  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:51.051144  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:51.550954  830558 type.go:168] "Request Body" body=""
	I1210 06:35:51.551028  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:51.551356  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:51.551411  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:52.050680  830558 type.go:168] "Request Body" body=""
	I1210 06:35:52.050756  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:52.051067  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:52.550548  830558 type.go:168] "Request Body" body=""
	I1210 06:35:52.550625  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:52.550952  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the GET https://192.168.49.2:8441/api/v1/nodes/functional-534748 request/response cycle shown above repeats every ~500 ms from 06:35:53 through 06:36:54 against an unreachable apiserver; every round logs an empty response, and roughly every two seconds node_ready.go emits the warning below ...]
	W1210 06:35:54.050492  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:55.049973  830558 type.go:168] "Request Body" body=""
	I1210 06:36:55.050057  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:55.050384  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:55.550589  830558 type.go:168] "Request Body" body=""
	I1210 06:36:55.550672  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:55.550984  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:56.050875  830558 type.go:168] "Request Body" body=""
	I1210 06:36:56.050955  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:56.051282  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:56.549975  830558 type.go:168] "Request Body" body=""
	I1210 06:36:56.550042  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:56.550349  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:56.550406  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:57.050072  830558 type.go:168] "Request Body" body=""
	I1210 06:36:57.050159  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:57.050499  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:57.549979  830558 type.go:168] "Request Body" body=""
	I1210 06:36:57.550054  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:57.550368  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:58.049963  830558 type.go:168] "Request Body" body=""
	I1210 06:36:58.050043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:58.050303  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:58.549986  830558 type.go:168] "Request Body" body=""
	I1210 06:36:58.550064  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:58.550409  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:58.550486  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:59.050159  830558 type.go:168] "Request Body" body=""
	I1210 06:36:59.050244  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:59.050617  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:59.550004  830558 type.go:168] "Request Body" body=""
	I1210 06:36:59.550080  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:59.550332  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:00.050088  830558 type.go:168] "Request Body" body=""
	I1210 06:37:00.050180  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:00.050543  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:00.550848  830558 type.go:168] "Request Body" body=""
	I1210 06:37:00.550935  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:00.551280  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:00.551339  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:01.050564  830558 type.go:168] "Request Body" body=""
	I1210 06:37:01.050644  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:01.050904  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:01.550685  830558 type.go:168] "Request Body" body=""
	I1210 06:37:01.550761  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:01.551120  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:02.050955  830558 type.go:168] "Request Body" body=""
	I1210 06:37:02.051039  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:02.051359  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:02.550089  830558 type.go:168] "Request Body" body=""
	I1210 06:37:02.550158  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:02.550512  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:03.049995  830558 type.go:168] "Request Body" body=""
	I1210 06:37:03.050079  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:03.050427  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:03.050509  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:03.549974  830558 type.go:168] "Request Body" body=""
	I1210 06:37:03.550095  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:03.550438  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:04.050664  830558 type.go:168] "Request Body" body=""
	I1210 06:37:04.050742  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:04.051055  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:04.550863  830558 type.go:168] "Request Body" body=""
	I1210 06:37:04.550938  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:04.551272  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:05.049983  830558 type.go:168] "Request Body" body=""
	I1210 06:37:05.050061  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:05.050389  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:05.550411  830558 type.go:168] "Request Body" body=""
	I1210 06:37:05.550500  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:05.550764  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:05.550808  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:06.050441  830558 type.go:168] "Request Body" body=""
	I1210 06:37:06.050533  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:06.050866  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:06.550678  830558 type.go:168] "Request Body" body=""
	I1210 06:37:06.550761  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:06.551104  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:07.050870  830558 type.go:168] "Request Body" body=""
	I1210 06:37:07.050944  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:07.051251  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:07.549972  830558 type.go:168] "Request Body" body=""
	I1210 06:37:07.550053  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:07.550410  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:08.050155  830558 type.go:168] "Request Body" body=""
	I1210 06:37:08.050239  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:08.050601  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:08.050664  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:08.549949  830558 type.go:168] "Request Body" body=""
	I1210 06:37:08.550023  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:08.550357  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:09.050028  830558 type.go:168] "Request Body" body=""
	I1210 06:37:09.050105  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:09.050443  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:09.550204  830558 type.go:168] "Request Body" body=""
	I1210 06:37:09.550291  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:09.550711  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:10.050422  830558 type.go:168] "Request Body" body=""
	I1210 06:37:10.050521  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:10.050851  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:10.050899  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:10.550710  830558 type.go:168] "Request Body" body=""
	I1210 06:37:10.550785  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:10.551141  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:11.050942  830558 type.go:168] "Request Body" body=""
	I1210 06:37:11.051021  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:11.051363  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:11.549975  830558 type.go:168] "Request Body" body=""
	I1210 06:37:11.550047  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:11.550368  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:12.050025  830558 type.go:168] "Request Body" body=""
	I1210 06:37:12.050103  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:12.050446  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:12.550179  830558 type.go:168] "Request Body" body=""
	I1210 06:37:12.550253  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:12.550680  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:12.550735  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:13.049956  830558 type.go:168] "Request Body" body=""
	I1210 06:37:13.050028  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:13.050303  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:13.550005  830558 type.go:168] "Request Body" body=""
	I1210 06:37:13.550080  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:13.550413  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:14.050155  830558 type.go:168] "Request Body" body=""
	I1210 06:37:14.050237  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:14.050614  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:14.550902  830558 type.go:168] "Request Body" body=""
	I1210 06:37:14.550998  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:14.551307  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:14.551376  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:15.050054  830558 type.go:168] "Request Body" body=""
	I1210 06:37:15.050140  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:15.050549  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:15.550678  830558 type.go:168] "Request Body" body=""
	I1210 06:37:15.550756  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:15.551093  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:16.050865  830558 type.go:168] "Request Body" body=""
	I1210 06:37:16.050946  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:16.051228  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:16.549930  830558 type.go:168] "Request Body" body=""
	I1210 06:37:16.550004  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:16.550336  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:17.049921  830558 type.go:168] "Request Body" body=""
	I1210 06:37:17.049995  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:17.050336  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:17.050393  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:17.550046  830558 type.go:168] "Request Body" body=""
	I1210 06:37:17.550123  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:17.550394  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:18.049994  830558 type.go:168] "Request Body" body=""
	I1210 06:37:18.050068  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:18.050366  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:18.550054  830558 type.go:168] "Request Body" body=""
	I1210 06:37:18.550130  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:18.550489  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:19.050104  830558 type.go:168] "Request Body" body=""
	I1210 06:37:19.050185  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:19.050515  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:19.050566  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:19.550225  830558 type.go:168] "Request Body" body=""
	I1210 06:37:19.550302  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:19.550665  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:20.050424  830558 type.go:168] "Request Body" body=""
	I1210 06:37:20.050518  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:20.050884  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:20.550762  830558 type.go:168] "Request Body" body=""
	I1210 06:37:20.550835  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:20.551162  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:21.050936  830558 type.go:168] "Request Body" body=""
	I1210 06:37:21.051012  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:21.051344  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:21.051398  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:21.550076  830558 type.go:168] "Request Body" body=""
	I1210 06:37:21.550149  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:21.550491  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:22.050770  830558 type.go:168] "Request Body" body=""
	I1210 06:37:22.050844  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:22.051151  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:22.550952  830558 type.go:168] "Request Body" body=""
	I1210 06:37:22.551036  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:22.551372  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:23.050031  830558 type.go:168] "Request Body" body=""
	I1210 06:37:23.050110  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:23.050450  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:23.550623  830558 type.go:168] "Request Body" body=""
	I1210 06:37:23.550694  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:23.551091  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:23.551140  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:24.050873  830558 type.go:168] "Request Body" body=""
	I1210 06:37:24.050957  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:24.051303  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:24.550019  830558 type.go:168] "Request Body" body=""
	I1210 06:37:24.550100  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:24.550430  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:25.050726  830558 type.go:168] "Request Body" body=""
	I1210 06:37:25.050795  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:25.051103  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:25.550027  830558 type.go:168] "Request Body" body=""
	I1210 06:37:25.550105  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:25.550431  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:26.050042  830558 type.go:168] "Request Body" body=""
	I1210 06:37:26.050122  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:26.050517  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:26.050574  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:26.550022  830558 type.go:168] "Request Body" body=""
	I1210 06:37:26.550096  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:26.550377  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:27.050012  830558 type.go:168] "Request Body" body=""
	I1210 06:37:27.050089  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:27.050437  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:27.550158  830558 type.go:168] "Request Body" body=""
	I1210 06:37:27.550236  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:27.550601  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:28.049974  830558 type.go:168] "Request Body" body=""
	I1210 06:37:28.050054  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:28.050437  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:28.550041  830558 type.go:168] "Request Body" body=""
	I1210 06:37:28.550120  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:28.550456  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:28.550530  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:29.050006  830558 type.go:168] "Request Body" body=""
	I1210 06:37:29.050087  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:29.050404  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:29.550133  830558 type.go:168] "Request Body" body=""
	I1210 06:37:29.550213  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:29.550518  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:30.050099  830558 type.go:168] "Request Body" body=""
	I1210 06:37:30.050189  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:30.050525  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:30.550673  830558 type.go:168] "Request Body" body=""
	I1210 06:37:30.550754  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:30.551134  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:30.551190  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:31.050885  830558 type.go:168] "Request Body" body=""
	I1210 06:37:31.050960  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:31.051274  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:31.550000  830558 type.go:168] "Request Body" body=""
	I1210 06:37:31.550131  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:31.550446  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:32.050028  830558 type.go:168] "Request Body" body=""
	I1210 06:37:32.050112  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:32.050408  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:32.550681  830558 type.go:168] "Request Body" body=""
	I1210 06:37:32.550771  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:32.551081  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:33.050860  830558 type.go:168] "Request Body" body=""
	I1210 06:37:33.050934  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:33.051248  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:33.051305  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:33.550026  830558 type.go:168] "Request Body" body=""
	I1210 06:37:33.550106  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:33.550448  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:34.049973  830558 type.go:168] "Request Body" body=""
	I1210 06:37:34.050046  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:34.050378  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:34.549977  830558 type.go:168] "Request Body" body=""
	I1210 06:37:34.550051  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:34.550376  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:35.050023  830558 type.go:168] "Request Body" body=""
	I1210 06:37:35.050098  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:35.050432  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:35.550550  830558 type.go:168] "Request Body" body=""
	I1210 06:37:35.550628  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:35.550892  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:35.550953  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:36.050690  830558 type.go:168] "Request Body" body=""
	I1210 06:37:36.050767  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:36.051081  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:36.550920  830558 type.go:168] "Request Body" body=""
	I1210 06:37:36.551001  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:36.551377  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:37.050702  830558 type.go:168] "Request Body" body=""
	I1210 06:37:37.050783  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:37.051058  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:37.550812  830558 type.go:168] "Request Body" body=""
	I1210 06:37:37.550889  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:37.551223  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:37.551281  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:38.049987  830558 type.go:168] "Request Body" body=""
	I1210 06:37:38.050064  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:38.050409  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:38.550706  830558 type.go:168] "Request Body" body=""
	I1210 06:37:38.550780  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:38.551043  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:39.050813  830558 type.go:168] "Request Body" body=""
	I1210 06:37:39.050899  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:39.051232  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:39.549927  830558 type.go:168] "Request Body" body=""
	I1210 06:37:39.550005  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:39.550337  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:40.050658  830558 type.go:168] "Request Body" body=""
	I1210 06:37:40.050741  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:40.051035  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:40.051084  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:40.549980  830558 type.go:168] "Request Body" body=""
	I1210 06:37:40.550072  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:40.550505  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:41.050026  830558 type.go:168] "Request Body" body=""
	I1210 06:37:41.050097  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:41.050387  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:41.550563  830558 type.go:168] "Request Body" body=""
	I1210 06:37:41.550631  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:41.550897  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:42.050745  830558 type.go:168] "Request Body" body=""
	I1210 06:37:42.050826  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:42.051169  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:42.051228  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:42.550950  830558 type.go:168] "Request Body" body=""
	I1210 06:37:42.551028  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:42.551348  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:43.050570  830558 type.go:168] "Request Body" body=""
	I1210 06:37:43.050646  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:43.050920  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:43.550724  830558 type.go:168] "Request Body" body=""
	I1210 06:37:43.550804  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:43.551126  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:44.050930  830558 type.go:168] "Request Body" body=""
	I1210 06:37:44.051007  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:44.051348  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:44.051402  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:44.550445  830558 type.go:168] "Request Body" body=""
	I1210 06:37:44.550537  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:44.550795  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:45.050638  830558 type.go:168] "Request Body" body=""
	I1210 06:37:45.050730  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:45.051044  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:45.550527  830558 type.go:168] "Request Body" body=""
	I1210 06:37:45.550601  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:45.550931  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:46.050582  830558 type.go:168] "Request Body" body=""
	I1210 06:37:46.050725  830558 node_ready.go:38] duration metric: took 6m0.000935284s for node "functional-534748" to be "Ready" ...
	I1210 06:37:46.053848  830558 out.go:203] 
	W1210 06:37:46.056787  830558 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 06:37:46.056817  830558 out.go:285] * 
	W1210 06:37:46.059108  830558 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:37:46.062914  830558 out.go:203] 
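Editor's note: the loop above is minikube waiting (unsuccessfully) for the node's Ready condition: one GET every 500ms against the apiserver, transient errors retried, until a 6-minute deadline expires and surfaces as "WaitNodeCondition: context deadline exceeded". A compilable client-go sketch of that shape (an approximation of the pattern, not minikube's actual node_ready.go):

    package nodewait

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls GET /api/v1/nodes/<name> every 500ms until the node
    // reports Ready=True or the 6-minute deadline expires. Transient errors
    // (e.g. "connection refused" while the apiserver restarts) are retried,
    // matching the "will retry" warnings in the log above.
    func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
    	ctx, cancel := context.WithTimeout(ctx, 6*time.Minute)
    	defer cancel()
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	for {
    		select {
    		case <-ctx.Done():
    			// reported above as: waiting for node to be ready: ... context deadline exceeded
    			return fmt.Errorf("waiting for node %q to be ready: %w", name, ctx.Err())
    		case <-ticker.C:
    			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				continue // will retry on the next tick
    			}
    			for _, cond := range node.Status.Conditions {
    				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    	}
    }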
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.044473661Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.044543782Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.044667574Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.044742135Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.044800712Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.044861554Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.044927113Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.044990876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.045066274Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.045166583Z" level=info msg="Connect containerd service"
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.045549881Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.046211762Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.058392030Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.058569106Z" level=info msg="Start subscribing containerd event"
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.058965353Z" level=info msg="Start recovering state"
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.067328662Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.096107621Z" level=info msg="Start event monitor"
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.096296103Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.096365273Z" level=info msg="Start streaming server"
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.096441360Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.096501923Z" level=info msg="runtime interface starting up..."
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.096569509Z" level=info msg="starting plugins..."
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.096634125Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 06:31:43 functional-534748 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.098532655Z" level=info msg="containerd successfully booted in 0.083444s"
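The only error in the containerd startup above is the CNI config load: nothing under /etc/cni/net.d yet. The kindnet CNI that minikube recommends for this driver/runtime combination typically writes that config only after the cluster bootstraps, so on a node that never gets past kubelet startup this is expected rather than a separate fault. It can be confirmed from inside the node:

    minikube ssh -p functional-534748 -- ls /etc/cni/net.d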
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:47.911483    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:47.912072    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:47.913749    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:47.914267    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:47.915935    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
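kubectl fails here for the same underlying reason as the node-ready poll: nothing is listening on the apiserver port. A minimal probe of the endpoint named in the errors above (run on the node; -k because the cluster uses minikube's own CA):

    curl -k https://localhost:8441/healthz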
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 06:37:47 up  5:19,  0 user,  load average: 0.43, 0.30, 0.78
	Linux functional-534748 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
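Reading this pair: the uname line is the host's kernel (kic containers share it, hence the -aws kernel build on this builder), while PRETTY_NAME comes from the kic base image's own Debian 12 userland. The kernel half can be confirmed from inside any profile:

    minikube ssh -p functional-534748 -- uname -r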
	
	
	==> kubelet <==
	Dec 10 06:37:44 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:37:45 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 808.
	Dec 10 06:37:45 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:45 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:45 functional-534748 kubelet[8308]: E1210 06:37:45.349349    8308 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:37:45 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:37:45 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:37:46 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 809.
	Dec 10 06:37:46 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:46 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:46 functional-534748 kubelet[8314]: E1210 06:37:46.133039    8314 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:37:46 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:37:46 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:37:46 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 10 06:37:46 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:46 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:46 functional-534748 kubelet[8319]: E1210 06:37:46.859597    8319 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:37:46 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:37:46 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:37:47 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 10 06:37:47 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:47 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:47 functional-534748 kubelet[8342]: E1210 06:37:47.597886    8342 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:37:47 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:37:47 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
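This repeating validation error is the root cause of the run's failures: the v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host, and this Ubuntu 20.04 builder is still on cgroup v1 (consistent with CgroupDriver:cgroupfs in the docker info captured elsewhere in this report). A standard check for a host's cgroup version:

    stat -fc %T /sys/fs/cgroup/
    # cgroup2fs on a cgroup v2 host, tmpfs on cgroup v1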
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748: exit status 2 (334.699207ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-534748" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.77s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-534748 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-534748 get po -A: exit status 1 (69.899177ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-534748 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-534748 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-534748 get po -A"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-534748
helpers_test.go:244: (dbg) docker inspect functional-534748:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	        "Created": "2025-12-10T06:23:23.608302198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 825111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:23:23.673039154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hostname",
	        "HostsPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hosts",
	        "LogPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db-json.log",
	        "Name": "/functional-534748",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-534748:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-534748",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	                "LowerDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-534748",
	                "Source": "/var/lib/docker/volumes/functional-534748/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-534748",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-534748",
	                "name.minikube.sigs.k8s.io": "functional-534748",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3110adc4cbcb3c834e173481804e433b3e0d0c32a77d5d828fb821433a717f76",
	            "SandboxKey": "/var/run/docker/netns/3110adc4cbcb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-534748": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:67:c6:ed:32:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5c6adbf364f07065a2170dca5c03ba4b1a3df833c656e117d5727d09797ca30e",
	                    "EndpointID": "ba255b7075cbb83aa425533c0034d7778d609f47fa6442925eb6c8393edb0fa6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-534748",
	                        "afb46bc1850e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
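The inspect output explains the port wiring: the apiserver's 8441/tcp is published only on 127.0.0.1 (host port 33533 in this run), and the cluster is otherwise reached over the profile network at 192.168.49.2. The live mapping can be read back without the full inspect (host port varies per run):

    docker port functional-534748 8441/tcp
    # e.g. 127.0.0.1:33533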
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748: exit status 2 (299.03938ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-634209 /tmp/TestFunctionalparallelMountCmdspecific-port1205426619/001:/mount-9p --alsologtostderr -v=1 --port 46464                       │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh            │ functional-634209 ssh findmnt -T /mount-9p | grep 9p                                                                                                    │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh            │ functional-634209 ssh findmnt -T /mount-9p | grep 9p                                                                                                    │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh            │ functional-634209 ssh -- ls -la /mount-9p                                                                                                               │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh            │ functional-634209 ssh sudo umount -f /mount-9p                                                                                                          │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ mount          │ -p functional-634209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2460956714/001:/mount1 --alsologtostderr -v=1                                      │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ mount          │ -p functional-634209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2460956714/001:/mount2 --alsologtostderr -v=1                                      │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ mount          │ -p functional-634209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2460956714/001:/mount3 --alsologtostderr -v=1                                      │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh            │ functional-634209 ssh findmnt -T /mount1                                                                                                                │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh            │ functional-634209 ssh findmnt -T /mount2                                                                                                                │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh            │ functional-634209 ssh findmnt -T /mount3                                                                                                                │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ mount          │ -p functional-634209 --kill=true                                                                                                                        │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ update-context │ functional-634209 update-context --alsologtostderr -v=2                                                                                                 │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ update-context │ functional-634209 update-context --alsologtostderr -v=2                                                                                                 │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ update-context │ functional-634209 update-context --alsologtostderr -v=2                                                                                                 │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image          │ functional-634209 image ls --format short --alsologtostderr                                                                                             │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image          │ functional-634209 image ls --format yaml --alsologtostderr                                                                                              │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh            │ functional-634209 ssh pgrep buildkitd                                                                                                                   │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ image          │ functional-634209 image ls --format json --alsologtostderr                                                                                              │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image          │ functional-634209 image build -t localhost/my-image:functional-634209 testdata/build --alsologtostderr                                                  │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image          │ functional-634209 image ls --format table --alsologtostderr                                                                                             │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image          │ functional-634209 image ls                                                                                                                              │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ delete         │ -p functional-634209                                                                                                                                    │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start          │ -p functional-534748 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ start          │ -p functional-534748 --alsologtostderr -v=8                                                                                                             │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:31 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:31:40
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:31:40.279311  830558 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:31:40.279505  830558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:31:40.279534  830558 out.go:374] Setting ErrFile to fd 2...
	I1210 06:31:40.279556  830558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:31:40.279849  830558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 06:31:40.280242  830558 out.go:368] Setting JSON to false
	I1210 06:31:40.281164  830558 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18825,"bootTime":1765329476,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 06:31:40.281259  830558 start.go:143] virtualization:  
	I1210 06:31:40.284710  830558 out.go:179] * [functional-534748] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:31:40.288411  830558 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:31:40.288473  830558 notify.go:221] Checking for updates...
	I1210 06:31:40.295121  830558 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:31:40.302607  830558 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:40.305522  830558 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 06:31:40.308355  830558 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:31:40.311698  830558 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:31:40.315095  830558 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:31:40.315199  830558 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:31:40.353797  830558 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:31:40.353929  830558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:31:40.415859  830558 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:31:40.405265704 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:31:40.415979  830558 docker.go:319] overlay module found
	I1210 06:31:40.419085  830558 out.go:179] * Using the docker driver based on existing profile
	I1210 06:31:40.421970  830558 start.go:309] selected driver: docker
	I1210 06:31:40.421991  830558 start.go:927] validating driver "docker" against &{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:31:40.422101  830558 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:31:40.422196  830558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:31:40.479216  830558 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:31:40.46865578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:31:40.479663  830558 cni.go:84] Creating CNI manager for ""
	I1210 06:31:40.479723  830558 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:31:40.479768  830558 start.go:353] cluster config:
	{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:31:40.482983  830558 out.go:179] * Starting "functional-534748" primary control-plane node in "functional-534748" cluster
	I1210 06:31:40.485814  830558 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 06:31:40.488782  830558 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:31:40.491625  830558 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:31:40.491676  830558 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 06:31:40.491687  830558 cache.go:65] Caching tarball of preloaded images
	I1210 06:31:40.491736  830558 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:31:40.491792  830558 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 06:31:40.491804  830558 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1210 06:31:40.491917  830558 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/config.json ...
	I1210 06:31:40.511808  830558 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:31:40.511830  830558 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:31:40.511847  830558 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:31:40.511881  830558 start.go:360] acquireMachinesLock for functional-534748: {Name:mkd9a3d78ae3a00b69c5be0f7badb099aea924eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:31:40.511943  830558 start.go:364] duration metric: took 39.41µs to acquireMachinesLock for "functional-534748"
	I1210 06:31:40.511975  830558 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:31:40.511985  830558 fix.go:54] fixHost starting: 
	I1210 06:31:40.512241  830558 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:31:40.529256  830558 fix.go:112] recreateIfNeeded on functional-534748: state=Running err=<nil>
	W1210 06:31:40.529298  830558 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:31:40.532448  830558 out.go:252] * Updating the running docker "functional-534748" container ...
	I1210 06:31:40.532488  830558 machine.go:94] provisionDockerMachine start ...
	I1210 06:31:40.532584  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:40.550188  830558 main.go:143] libmachine: Using SSH client type: native
	I1210 06:31:40.550543  830558 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:31:40.550560  830558 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:31:40.681995  830558 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-534748
	
	I1210 06:31:40.682020  830558 ubuntu.go:182] provisioning hostname "functional-534748"
	I1210 06:31:40.682096  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:40.699737  830558 main.go:143] libmachine: Using SSH client type: native
	I1210 06:31:40.700054  830558 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:31:40.700072  830558 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-534748 && echo "functional-534748" | sudo tee /etc/hostname
	I1210 06:31:40.843977  830558 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-534748
	
	I1210 06:31:40.844083  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:40.862627  830558 main.go:143] libmachine: Using SSH client type: native
	I1210 06:31:40.862951  830558 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:31:40.862975  830558 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-534748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-534748/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-534748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:31:40.999052  830558 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:31:40.999087  830558 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 06:31:40.999116  830558 ubuntu.go:190] setting up certificates
	I1210 06:31:40.999127  830558 provision.go:84] configureAuth start
	I1210 06:31:40.999208  830558 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
	I1210 06:31:41.018099  830558 provision.go:143] copyHostCerts
	I1210 06:31:41.018148  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 06:31:41.018188  830558 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 06:31:41.018200  830558 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 06:31:41.018276  830558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 06:31:41.018376  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 06:31:41.018397  830558 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 06:31:41.018412  830558 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 06:31:41.018442  830558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 06:31:41.018539  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 06:31:41.018565  830558 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 06:31:41.018570  830558 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 06:31:41.018598  830558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 06:31:41.018664  830558 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.functional-534748 san=[127.0.0.1 192.168.49.2 functional-534748 localhost minikube]
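The server certificate minted here carries the SANs listed at the end of that line (loopback, the node IP, hostname aliases). If a TLS mismatch ever needs triage, the SANs can be checked offline against the file path from this run:

    openssl x509 -in /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'
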
	I1210 06:31:41.416959  830558 provision.go:177] copyRemoteCerts
	I1210 06:31:41.417039  830558 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:31:41.417085  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.434643  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:41.530263  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 06:31:41.530324  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:31:41.547539  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 06:31:41.547601  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:31:41.565054  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 06:31:41.565115  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 06:31:41.582586  830558 provision.go:87] duration metric: took 583.43959ms to configureAuth
	I1210 06:31:41.582635  830558 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:31:41.582823  830558 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:31:41.582837  830558 machine.go:97] duration metric: took 1.050342086s to provisionDockerMachine
	I1210 06:31:41.582845  830558 start.go:293] postStartSetup for "functional-534748" (driver="docker")
	I1210 06:31:41.582857  830558 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:31:41.582912  830558 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:31:41.582957  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.603404  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:41.698354  830558 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:31:41.701779  830558 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 06:31:41.701843  830558 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 06:31:41.701865  830558 command_runner.go:130] > VERSION_ID="12"
	I1210 06:31:41.701877  830558 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 06:31:41.701883  830558 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 06:31:41.701887  830558 command_runner.go:130] > ID=debian
	I1210 06:31:41.701891  830558 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 06:31:41.701896  830558 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 06:31:41.701906  830558 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 06:31:41.701968  830558 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:31:41.702000  830558 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:31:41.702014  830558 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 06:31:41.702084  830558 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 06:31:41.702172  830558 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 06:31:41.702185  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> /etc/ssl/certs/7867512.pem
	I1210 06:31:41.702261  830558 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts -> hosts in /etc/test/nested/copy/786751
	I1210 06:31:41.702269  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts -> /etc/test/nested/copy/786751/hosts
	I1210 06:31:41.702315  830558 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/786751
	I1210 06:31:41.709991  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 06:31:41.727898  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts --> /etc/test/nested/copy/786751/hosts (40 bytes)
	I1210 06:31:41.745651  830558 start.go:296] duration metric: took 162.79042ms for postStartSetup
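
The two "local asset" entries above come from minikube's file-sync convention: anything staged under the profile's .minikube/files tree is copied to the same absolute path inside the node during start. A minimal sketch of that workflow, with extra-ca.pem as a made-up file name:

    # on the host: stage a file under the files tree
    mkdir -p ~/.minikube/files/etc/ssl/certs
    cp extra-ca.pem ~/.minikube/files/etc/ssl/certs/
    # on the next `minikube start`, the file appears in the node as
    # /etc/ssl/certs/extra-ca.pem (the same mechanism behind the
    # 7867512.pem and nested hosts copies logged above)
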
	I1210 06:31:41.745798  830558 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:31:41.745866  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.763287  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:41.863262  830558 command_runner.go:130] > 19%
	I1210 06:31:41.863843  830558 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:31:41.868394  830558 command_runner.go:130] > 159G
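
The two probes above are plain df/awk one-liners run over SSH: the first reports percent-used on /var, the second free space in whole gigabytes. Run inside the node (e.g. via minikube ssh) they reproduce the values logged here:

    df -h /var | awk 'NR==2{print $5}'    # used, "19%" in this run
    df -BG /var | awk 'NR==2{print $4}'   # available, "159G" in this run
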
	I1210 06:31:41.868719  830558 fix.go:56] duration metric: took 1.356728705s for fixHost
	I1210 06:31:41.868739  830558 start.go:83] releasing machines lock for "functional-534748", held for 1.35678464s
	I1210 06:31:41.868810  830558 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
	I1210 06:31:41.887031  830558 ssh_runner.go:195] Run: cat /version.json
	I1210 06:31:41.887084  830558 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:31:41.887092  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.887143  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.906606  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:41.920523  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:42.095537  830558 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1210 06:31:42.095667  830558 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765319469-22089", "minikube_version": "v1.37.0", "commit": "3b564f551de69272c9de22efc5b37f8a5b0156c7"}
	I1210 06:31:42.095846  830558 ssh_runner.go:195] Run: systemctl --version
	I1210 06:31:42.103080  830558 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 06:31:42.103120  830558 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 06:31:42.103532  830558 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 06:31:42.109223  830558 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 06:31:42.109308  830558 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:31:42.109410  830558 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:31:42.119226  830558 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
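
The find/mv pass above side-lines any pre-existing bridge or podman CNI configs by appending a .mk_disabled suffix; in this run there was nothing to rename. To see the effect on a node where such configs do exist (inside the node, via minikube ssh):

    ls /etc/cni/net.d/
    # files renamed to *.mk_disabled are skipped by the CNI config loader,
    # which only considers *.conf, *.conflist and *.json
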
	I1210 06:31:42.119255  830558 start.go:496] detecting cgroup driver to use...
	I1210 06:31:42.119293  830558 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:31:42.119365  830558 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 06:31:42.140472  830558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 06:31:42.156795  830558 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:31:42.156872  830558 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:31:42.175919  830558 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:31:42.191679  830558 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:31:42.319538  830558 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:31:42.438460  830558 docker.go:234] disabling docker service ...
	I1210 06:31:42.438580  830558 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:31:42.456224  830558 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:31:42.471442  830558 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:31:42.599250  830558 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:31:42.716867  830558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:31:42.729172  830558 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:31:42.742342  830558 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
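
The tee above writes the crictl client config, pointing it at the containerd socket instead of letting it probe default endpoints. With /etc/crictl.yaml in place, the usual CRI commands work directly (and are what the preload check below relies on):

    cat /etc/crictl.yaml
    # runtime-endpoint: unix:///run/containerd/containerd.sock
    sudo crictl ps -a       # all containers
    sudo crictl images      # image list, same data as the JSON dump below
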
	I1210 06:31:42.743581  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 06:31:42.752861  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 06:31:42.762203  830558 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 06:31:42.762278  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 06:31:42.771751  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:31:42.780168  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 06:31:42.788652  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:31:42.797230  830558 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:31:42.805633  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 06:31:42.814368  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 06:31:42.823074  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 06:31:42.832256  830558 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:31:42.839109  830558 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 06:31:42.840076  830558 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:31:42.847676  830558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:31:42.968893  830558 ssh_runner.go:195] Run: sudo systemctl restart containerd
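
The sed pipeline above rewrites /etc/containerd/config.toml in place before the restart: the pause image is pinned to registry.k8s.io/pause:3.10.1, SystemdCgroup is forced to false to match the cgroupfs driver detected on the host, legacy runc v1 runtime names are rewritten to io.containerd.runc.v2, and enable_unprivileged_ports is turned on. A spot-check after the restart, assuming the stock kicbase config layout:

    grep -E 'sandbox_image|SystemdCgroup|enable_unprivileged_ports' /etc/containerd/config.toml
    # expected values:
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   SystemdCgroup = false
    #   enable_unprivileged_ports = true
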
	I1210 06:31:43.099901  830558 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 06:31:43.099974  830558 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 06:31:43.103852  830558 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1210 06:31:43.103874  830558 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 06:31:43.103881  830558 command_runner.go:130] > Device: 0,72	Inode: 1614        Links: 1
	I1210 06:31:43.103888  830558 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 06:31:43.103903  830558 command_runner.go:130] > Access: 2025-12-10 06:31:43.050872938 +0000
	I1210 06:31:43.103913  830558 command_runner.go:130] > Modify: 2025-12-10 06:31:43.050872938 +0000
	I1210 06:31:43.103919  830558 command_runner.go:130] > Change: 2025-12-10 06:31:43.062873060 +0000
	I1210 06:31:43.103925  830558 command_runner.go:130] >  Birth: -
	I1210 06:31:43.103951  830558 start.go:564] Will wait 60s for crictl version
	I1210 06:31:43.104009  830558 ssh_runner.go:195] Run: which crictl
	I1210 06:31:43.107381  830558 command_runner.go:130] > /usr/local/bin/crictl
	I1210 06:31:43.107477  830558 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:31:43.129358  830558 command_runner.go:130] > Version:  0.1.0
	I1210 06:31:43.129383  830558 command_runner.go:130] > RuntimeName:  containerd
	I1210 06:31:43.129392  830558 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1210 06:31:43.129396  830558 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 06:31:43.131610  830558 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 06:31:43.131682  830558 ssh_runner.go:195] Run: containerd --version
	I1210 06:31:43.151833  830558 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1210 06:31:43.153818  830558 ssh_runner.go:195] Run: containerd --version
	I1210 06:31:43.172831  830558 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1210 06:31:43.180465  830558 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 06:31:43.183314  830558 cli_runner.go:164] Run: docker network inspect functional-534748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:31:43.199081  830558 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 06:31:43.202971  830558 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1210 06:31:43.203147  830558 kubeadm.go:884] updating cluster {Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:31:43.203272  830558 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:31:43.203351  830558 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:31:43.227955  830558 command_runner.go:130] > {
	I1210 06:31:43.227978  830558 command_runner.go:130] >   "images":  [
	I1210 06:31:43.227982  830558 command_runner.go:130] >     {
	I1210 06:31:43.227991  830558 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1210 06:31:43.227996  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228002  830558 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1210 06:31:43.228005  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228009  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228020  830558 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1210 06:31:43.228023  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228028  830558 command_runner.go:130] >       "size":  "40636774",
	I1210 06:31:43.228032  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228036  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228040  830558 command_runner.go:130] >     },
	I1210 06:31:43.228044  830558 command_runner.go:130] >     {
	I1210 06:31:43.228052  830558 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1210 06:31:43.228056  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228061  830558 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 06:31:43.228066  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228082  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228094  830558 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1210 06:31:43.228097  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228102  830558 command_runner.go:130] >       "size":  "8034419",
	I1210 06:31:43.228108  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228112  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228117  830558 command_runner.go:130] >     },
	I1210 06:31:43.228121  830558 command_runner.go:130] >     {
	I1210 06:31:43.228128  830558 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 06:31:43.228135  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228141  830558 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 06:31:43.228153  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228160  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228168  830558 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1210 06:31:43.228174  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228178  830558 command_runner.go:130] >       "size":  "21168808",
	I1210 06:31:43.228182  830558 command_runner.go:130] >       "username":  "nonroot",
	I1210 06:31:43.228186  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228191  830558 command_runner.go:130] >     },
	I1210 06:31:43.228195  830558 command_runner.go:130] >     {
	I1210 06:31:43.228204  830558 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1210 06:31:43.228208  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228215  830558 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1210 06:31:43.228219  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228225  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228233  830558 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1210 06:31:43.228239  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228243  830558 command_runner.go:130] >       "size":  "21136588",
	I1210 06:31:43.228247  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228250  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.228254  830558 command_runner.go:130] >       },
	I1210 06:31:43.228258  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228264  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228272  830558 command_runner.go:130] >     },
	I1210 06:31:43.228279  830558 command_runner.go:130] >     {
	I1210 06:31:43.228286  830558 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1210 06:31:43.228290  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228295  830558 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1210 06:31:43.228299  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228303  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228313  830558 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1210 06:31:43.228317  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228321  830558 command_runner.go:130] >       "size":  "24678359",
	I1210 06:31:43.228331  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228340  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.228350  830558 command_runner.go:130] >       },
	I1210 06:31:43.228354  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228357  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228361  830558 command_runner.go:130] >     },
	I1210 06:31:43.228364  830558 command_runner.go:130] >     {
	I1210 06:31:43.228371  830558 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1210 06:31:43.228384  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228390  830558 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1210 06:31:43.228394  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228398  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228406  830558 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1210 06:31:43.228412  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228416  830558 command_runner.go:130] >       "size":  "20661043",
	I1210 06:31:43.228420  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228424  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.228427  830558 command_runner.go:130] >       },
	I1210 06:31:43.228438  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228443  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228445  830558 command_runner.go:130] >     },
	I1210 06:31:43.228448  830558 command_runner.go:130] >     {
	I1210 06:31:43.228455  830558 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1210 06:31:43.228463  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228471  830558 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1210 06:31:43.228475  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228479  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228487  830558 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1210 06:31:43.228493  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228497  830558 command_runner.go:130] >       "size":  "22429671",
	I1210 06:31:43.228502  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228512  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228515  830558 command_runner.go:130] >     },
	I1210 06:31:43.228518  830558 command_runner.go:130] >     {
	I1210 06:31:43.228525  830558 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1210 06:31:43.228530  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228538  830558 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1210 06:31:43.228542  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228546  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228557  830558 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1210 06:31:43.228566  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228573  830558 command_runner.go:130] >       "size":  "15391364",
	I1210 06:31:43.228577  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228580  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.228584  830558 command_runner.go:130] >       },
	I1210 06:31:43.228594  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228598  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228601  830558 command_runner.go:130] >     },
	I1210 06:31:43.228604  830558 command_runner.go:130] >     {
	I1210 06:31:43.228611  830558 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 06:31:43.228617  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228621  830558 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 06:31:43.228627  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228631  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228641  830558 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1210 06:31:43.228647  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228655  830558 command_runner.go:130] >       "size":  "267939",
	I1210 06:31:43.228659  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228669  830558 command_runner.go:130] >         "value":  "65535"
	I1210 06:31:43.228673  830558 command_runner.go:130] >       },
	I1210 06:31:43.228677  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228681  830558 command_runner.go:130] >       "pinned":  true
	I1210 06:31:43.228686  830558 command_runner.go:130] >     }
	I1210 06:31:43.228689  830558 command_runner.go:130] >   ]
	I1210 06:31:43.228692  830558 command_runner.go:130] > }
	I1210 06:31:43.228843  830558 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 06:31:43.228853  830558 containerd.go:534] Images already preloaded, skipping extraction
	I1210 06:31:43.228913  830558 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:31:43.254390  830558 command_runner.go:130] > {
	I1210 06:31:43.254411  830558 command_runner.go:130] >   "images":  [
	I1210 06:31:43.254415  830558 command_runner.go:130] >     {
	I1210 06:31:43.254424  830558 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1210 06:31:43.254430  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254435  830558 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1210 06:31:43.254440  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254444  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254453  830558 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1210 06:31:43.254460  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254488  830558 command_runner.go:130] >       "size":  "40636774",
	I1210 06:31:43.254495  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254499  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254508  830558 command_runner.go:130] >     },
	I1210 06:31:43.254512  830558 command_runner.go:130] >     {
	I1210 06:31:43.254527  830558 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1210 06:31:43.254534  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254540  830558 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 06:31:43.254543  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254547  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254556  830558 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1210 06:31:43.254576  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254581  830558 command_runner.go:130] >       "size":  "8034419",
	I1210 06:31:43.254585  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254589  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254600  830558 command_runner.go:130] >     },
	I1210 06:31:43.254603  830558 command_runner.go:130] >     {
	I1210 06:31:43.254609  830558 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 06:31:43.254619  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254624  830558 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 06:31:43.254627  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254638  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254649  830558 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1210 06:31:43.254661  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254665  830558 command_runner.go:130] >       "size":  "21168808",
	I1210 06:31:43.254669  830558 command_runner.go:130] >       "username":  "nonroot",
	I1210 06:31:43.254673  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254677  830558 command_runner.go:130] >     },
	I1210 06:31:43.254680  830558 command_runner.go:130] >     {
	I1210 06:31:43.254694  830558 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1210 06:31:43.254698  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254703  830558 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1210 06:31:43.254706  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254710  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254721  830558 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1210 06:31:43.254725  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254729  830558 command_runner.go:130] >       "size":  "21136588",
	I1210 06:31:43.254735  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.254739  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.254744  830558 command_runner.go:130] >       },
	I1210 06:31:43.254749  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254753  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254765  830558 command_runner.go:130] >     },
	I1210 06:31:43.254768  830558 command_runner.go:130] >     {
	I1210 06:31:43.254779  830558 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1210 06:31:43.254786  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254791  830558 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1210 06:31:43.254795  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254798  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254806  830558 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1210 06:31:43.254810  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254816  830558 command_runner.go:130] >       "size":  "24678359",
	I1210 06:31:43.254820  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.254831  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.254835  830558 command_runner.go:130] >       },
	I1210 06:31:43.254843  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254850  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254853  830558 command_runner.go:130] >     },
	I1210 06:31:43.254860  830558 command_runner.go:130] >     {
	I1210 06:31:43.254867  830558 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1210 06:31:43.254873  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254879  830558 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1210 06:31:43.254882  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254886  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254894  830558 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1210 06:31:43.254897  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254901  830558 command_runner.go:130] >       "size":  "20661043",
	I1210 06:31:43.254907  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.254911  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.254916  830558 command_runner.go:130] >       },
	I1210 06:31:43.254920  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254926  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254929  830558 command_runner.go:130] >     },
	I1210 06:31:43.254932  830558 command_runner.go:130] >     {
	I1210 06:31:43.254939  830558 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1210 06:31:43.254945  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254951  830558 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1210 06:31:43.254958  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254962  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254970  830558 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1210 06:31:43.254975  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254979  830558 command_runner.go:130] >       "size":  "22429671",
	I1210 06:31:43.254982  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254987  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254992  830558 command_runner.go:130] >     },
	I1210 06:31:43.254995  830558 command_runner.go:130] >     {
	I1210 06:31:43.255004  830558 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1210 06:31:43.255008  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.255022  830558 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1210 06:31:43.255026  830558 command_runner.go:130] >       ],
	I1210 06:31:43.255030  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.255038  830558 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1210 06:31:43.255044  830558 command_runner.go:130] >       ],
	I1210 06:31:43.255048  830558 command_runner.go:130] >       "size":  "15391364",
	I1210 06:31:43.255051  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.255055  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.255058  830558 command_runner.go:130] >       },
	I1210 06:31:43.255061  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.255065  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.255069  830558 command_runner.go:130] >     },
	I1210 06:31:43.255072  830558 command_runner.go:130] >     {
	I1210 06:31:43.255081  830558 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 06:31:43.255088  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.255093  830558 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 06:31:43.255098  830558 command_runner.go:130] >       ],
	I1210 06:31:43.255102  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.255109  830558 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1210 06:31:43.255112  830558 command_runner.go:130] >       ],
	I1210 06:31:43.255116  830558 command_runner.go:130] >       "size":  "267939",
	I1210 06:31:43.255122  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.255129  830558 command_runner.go:130] >         "value":  "65535"
	I1210 06:31:43.255136  830558 command_runner.go:130] >       },
	I1210 06:31:43.255140  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.255143  830558 command_runner.go:130] >       "pinned":  true
	I1210 06:31:43.255147  830558 command_runner.go:130] >     }
	I1210 06:31:43.255150  830558 command_runner.go:130] >   ]
	I1210 06:31:43.255153  830558 command_runner.go:130] > }
	I1210 06:31:43.257476  830558 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 06:31:43.257497  830558 cache_images.go:86] Images are preloaded, skipping loading
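
The "preloaded" verdict is reached by comparing the repoTags in the crictl JSON above against the images kubeadm needs for v1.35.0-beta.0; every expected tag is present, so the preload tarball is not extracted again. The same comparison can be done by hand, assuming jq is available on the node:

    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort
    # must cover, per the dump above:
    #   docker.io/kindest/kindnetd:v20250512-df8de77b
    #   gcr.io/k8s-minikube/storage-provisioner:v5
    #   registry.k8s.io/coredns/coredns:v1.13.1
    #   registry.k8s.io/etcd:3.6.5-0
    #   registry.k8s.io/kube-apiserver:v1.35.0-beta.0
    #   registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
    #   registry.k8s.io/kube-proxy:v1.35.0-beta.0
    #   registry.k8s.io/kube-scheduler:v1.35.0-beta.0
    #   registry.k8s.io/pause:3.10.1
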
	I1210 06:31:43.257505  830558 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1210 06:31:43.257607  830558 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-534748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
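
The empty ExecStart= line in the generated drop-in is deliberate systemd syntax: for a normal (non-oneshot) service, an override must first clear the base unit's ExecStart before supplying a replacement, otherwise systemd rejects the duplicate directive. The merged result can be inspected on the node with:

    systemctl cat kubelet                           # base unit + 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart --no-pager  # the effective command line
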
	I1210 06:31:43.257674  830558 ssh_runner.go:195] Run: sudo crictl info
	I1210 06:31:43.280486  830558 command_runner.go:130] > {
	I1210 06:31:43.280508  830558 command_runner.go:130] >   "cniconfig": {
	I1210 06:31:43.280515  830558 command_runner.go:130] >     "Networks": [
	I1210 06:31:43.280519  830558 command_runner.go:130] >       {
	I1210 06:31:43.280525  830558 command_runner.go:130] >         "Config": {
	I1210 06:31:43.280531  830558 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1210 06:31:43.280536  830558 command_runner.go:130] >           "Name": "cni-loopback",
	I1210 06:31:43.280541  830558 command_runner.go:130] >           "Plugins": [
	I1210 06:31:43.280545  830558 command_runner.go:130] >             {
	I1210 06:31:43.280549  830558 command_runner.go:130] >               "Network": {
	I1210 06:31:43.280553  830558 command_runner.go:130] >                 "ipam": {},
	I1210 06:31:43.280572  830558 command_runner.go:130] >                 "type": "loopback"
	I1210 06:31:43.280586  830558 command_runner.go:130] >               },
	I1210 06:31:43.280593  830558 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1210 06:31:43.280596  830558 command_runner.go:130] >             }
	I1210 06:31:43.280600  830558 command_runner.go:130] >           ],
	I1210 06:31:43.280614  830558 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1210 06:31:43.280625  830558 command_runner.go:130] >         },
	I1210 06:31:43.280630  830558 command_runner.go:130] >         "IFName": "lo"
	I1210 06:31:43.280633  830558 command_runner.go:130] >       }
	I1210 06:31:43.280637  830558 command_runner.go:130] >     ],
	I1210 06:31:43.280642  830558 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1210 06:31:43.280652  830558 command_runner.go:130] >     "PluginDirs": [
	I1210 06:31:43.280656  830558 command_runner.go:130] >       "/opt/cni/bin"
	I1210 06:31:43.280660  830558 command_runner.go:130] >     ],
	I1210 06:31:43.280671  830558 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1210 06:31:43.280679  830558 command_runner.go:130] >     "Prefix": "eth"
	I1210 06:31:43.280682  830558 command_runner.go:130] >   },
	I1210 06:31:43.280686  830558 command_runner.go:130] >   "config": {
	I1210 06:31:43.280693  830558 command_runner.go:130] >     "cdiSpecDirs": [
	I1210 06:31:43.280699  830558 command_runner.go:130] >       "/etc/cdi",
	I1210 06:31:43.280705  830558 command_runner.go:130] >       "/var/run/cdi"
	I1210 06:31:43.280710  830558 command_runner.go:130] >     ],
	I1210 06:31:43.280714  830558 command_runner.go:130] >     "cni": {
	I1210 06:31:43.280725  830558 command_runner.go:130] >       "binDir": "",
	I1210 06:31:43.280729  830558 command_runner.go:130] >       "binDirs": [
	I1210 06:31:43.280732  830558 command_runner.go:130] >         "/opt/cni/bin"
	I1210 06:31:43.280736  830558 command_runner.go:130] >       ],
	I1210 06:31:43.280740  830558 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1210 06:31:43.280744  830558 command_runner.go:130] >       "confTemplate": "",
	I1210 06:31:43.280747  830558 command_runner.go:130] >       "ipPref": "",
	I1210 06:31:43.280751  830558 command_runner.go:130] >       "maxConfNum": 1,
	I1210 06:31:43.280755  830558 command_runner.go:130] >       "setupSerially": false,
	I1210 06:31:43.280759  830558 command_runner.go:130] >       "useInternalLoopback": false
	I1210 06:31:43.280762  830558 command_runner.go:130] >     },
	I1210 06:31:43.280768  830558 command_runner.go:130] >     "containerd": {
	I1210 06:31:43.280772  830558 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1210 06:31:43.280776  830558 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1210 06:31:43.280781  830558 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1210 06:31:43.280789  830558 command_runner.go:130] >       "runtimes": {
	I1210 06:31:43.280793  830558 command_runner.go:130] >         "runc": {
	I1210 06:31:43.280797  830558 command_runner.go:130] >           "ContainerAnnotations": null,
	I1210 06:31:43.280802  830558 command_runner.go:130] >           "PodAnnotations": null,
	I1210 06:31:43.280806  830558 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1210 06:31:43.280811  830558 command_runner.go:130] >           "cgroupWritable": false,
	I1210 06:31:43.280814  830558 command_runner.go:130] >           "cniConfDir": "",
	I1210 06:31:43.280818  830558 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1210 06:31:43.280822  830558 command_runner.go:130] >           "io_type": "",
	I1210 06:31:43.280827  830558 command_runner.go:130] >           "options": {
	I1210 06:31:43.280838  830558 command_runner.go:130] >             "BinaryName": "",
	I1210 06:31:43.280850  830558 command_runner.go:130] >             "CriuImagePath": "",
	I1210 06:31:43.280854  830558 command_runner.go:130] >             "CriuWorkPath": "",
	I1210 06:31:43.280858  830558 command_runner.go:130] >             "IoGid": 0,
	I1210 06:31:43.280862  830558 command_runner.go:130] >             "IoUid": 0,
	I1210 06:31:43.280866  830558 command_runner.go:130] >             "NoNewKeyring": false,
	I1210 06:31:43.280872  830558 command_runner.go:130] >             "Root": "",
	I1210 06:31:43.280877  830558 command_runner.go:130] >             "ShimCgroup": "",
	I1210 06:31:43.280883  830558 command_runner.go:130] >             "SystemdCgroup": false
	I1210 06:31:43.280887  830558 command_runner.go:130] >           },
	I1210 06:31:43.280892  830558 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1210 06:31:43.280898  830558 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1210 06:31:43.280902  830558 command_runner.go:130] >           "runtimePath": "",
	I1210 06:31:43.280907  830558 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1210 06:31:43.280912  830558 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1210 06:31:43.280918  830558 command_runner.go:130] >           "snapshotter": ""
	I1210 06:31:43.280921  830558 command_runner.go:130] >         }
	I1210 06:31:43.280925  830558 command_runner.go:130] >       }
	I1210 06:31:43.280930  830558 command_runner.go:130] >     },
	I1210 06:31:43.280941  830558 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1210 06:31:43.280949  830558 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1210 06:31:43.280959  830558 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1210 06:31:43.280965  830558 command_runner.go:130] >     "disableApparmor": false,
	I1210 06:31:43.280970  830558 command_runner.go:130] >     "disableHugetlbController": true,
	I1210 06:31:43.280976  830558 command_runner.go:130] >     "disableProcMount": false,
	I1210 06:31:43.280983  830558 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1210 06:31:43.280986  830558 command_runner.go:130] >     "enableCDI": true,
	I1210 06:31:43.280991  830558 command_runner.go:130] >     "enableSelinux": false,
	I1210 06:31:43.280995  830558 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1210 06:31:43.281002  830558 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1210 06:31:43.281009  830558 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1210 06:31:43.281014  830558 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1210 06:31:43.281021  830558 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1210 06:31:43.281029  830558 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1210 06:31:43.281034  830558 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1210 06:31:43.281040  830558 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1210 06:31:43.281047  830558 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1210 06:31:43.281052  830558 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1210 06:31:43.281057  830558 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1210 06:31:43.281062  830558 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1210 06:31:43.281067  830558 command_runner.go:130] >   },
	I1210 06:31:43.281071  830558 command_runner.go:130] >   "features": {
	I1210 06:31:43.281076  830558 command_runner.go:130] >     "supplemental_groups_policy": true
	I1210 06:31:43.281079  830558 command_runner.go:130] >   },
	I1210 06:31:43.281083  830558 command_runner.go:130] >   "golang": "go1.24.9",
	I1210 06:31:43.281095  830558 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1210 06:31:43.281107  830558 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1210 06:31:43.281111  830558 command_runner.go:130] >   "runtimeHandlers": [
	I1210 06:31:43.281114  830558 command_runner.go:130] >     {
	I1210 06:31:43.281118  830558 command_runner.go:130] >       "features": {
	I1210 06:31:43.281129  830558 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1210 06:31:43.281134  830558 command_runner.go:130] >         "user_namespaces": true
	I1210 06:31:43.281137  830558 command_runner.go:130] >       }
	I1210 06:31:43.281142  830558 command_runner.go:130] >     },
	I1210 06:31:43.281145  830558 command_runner.go:130] >     {
	I1210 06:31:43.281148  830558 command_runner.go:130] >       "features": {
	I1210 06:31:43.281153  830558 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1210 06:31:43.281158  830558 command_runner.go:130] >         "user_namespaces": true
	I1210 06:31:43.281161  830558 command_runner.go:130] >       },
	I1210 06:31:43.281168  830558 command_runner.go:130] >       "name": "runc"
	I1210 06:31:43.281171  830558 command_runner.go:130] >     }
	I1210 06:31:43.281174  830558 command_runner.go:130] >   ],
	I1210 06:31:43.281178  830558 command_runner.go:130] >   "status": {
	I1210 06:31:43.281183  830558 command_runner.go:130] >     "conditions": [
	I1210 06:31:43.281186  830558 command_runner.go:130] >       {
	I1210 06:31:43.281190  830558 command_runner.go:130] >         "message": "",
	I1210 06:31:43.281205  830558 command_runner.go:130] >         "reason": "",
	I1210 06:31:43.281209  830558 command_runner.go:130] >         "status": true,
	I1210 06:31:43.281214  830558 command_runner.go:130] >         "type": "RuntimeReady"
	I1210 06:31:43.281220  830558 command_runner.go:130] >       },
	I1210 06:31:43.281224  830558 command_runner.go:130] >       {
	I1210 06:31:43.281230  830558 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1210 06:31:43.281235  830558 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1210 06:31:43.281239  830558 command_runner.go:130] >         "status": false,
	I1210 06:31:43.281243  830558 command_runner.go:130] >         "type": "NetworkReady"
	I1210 06:31:43.281246  830558 command_runner.go:130] >       },
	I1210 06:31:43.281249  830558 command_runner.go:130] >       {
	I1210 06:31:43.281271  830558 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1210 06:31:43.281280  830558 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1210 06:31:43.281286  830558 command_runner.go:130] >         "status": false,
	I1210 06:31:43.281292  830558 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1210 06:31:43.281298  830558 command_runner.go:130] >       }
	I1210 06:31:43.281301  830558 command_runner.go:130] >     ]
	I1210 06:31:43.281304  830558 command_runner.go:130] >   }
	I1210 06:31:43.281308  830558 command_runner.go:130] > }
	I1210 06:31:43.283879  830558 cni.go:84] Creating CNI manager for ""
	I1210 06:31:43.283902  830558 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:31:43.283924  830558 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:31:43.283950  830558 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-534748 NodeName:functional-534748 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:31:43.284076  830558 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-534748"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:31:43.284154  830558 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 06:31:43.290942  830558 command_runner.go:130] > kubeadm
	I1210 06:31:43.290962  830558 command_runner.go:130] > kubectl
	I1210 06:31:43.290967  830558 command_runner.go:130] > kubelet
	I1210 06:31:43.291913  830558 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:31:43.292013  830558 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:31:43.299680  830558 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 06:31:43.314082  830558 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 06:31:43.330260  830558 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1210 06:31:43.347625  830558 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:31:43.352127  830558 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
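	The grep above confirms that control-plane.minikube.internal already resolves to the node IP inside the container, so minikube skips rewriting /etc/hosts. A hedged sketch of that check-and-fix step, using the same names the log prints:

	    if ! grep -q 'control-plane.minikube.internal' /etc/hosts; then
	      # entry missing: append the control-plane alias for this node IP
	      printf '192.168.49.2\tcontrol-plane.minikube.internal\n' | sudo tee -a /etc/hosts
	    fi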
	I1210 06:31:43.352925  830558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:31:43.471703  830558 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:31:44.297320  830558 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748 for IP: 192.168.49.2
	I1210 06:31:44.297353  830558 certs.go:195] generating shared ca certs ...
	I1210 06:31:44.297370  830558 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:31:44.297565  830558 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 06:31:44.297620  830558 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 06:31:44.297640  830558 certs.go:257] generating profile certs ...
	I1210 06:31:44.297767  830558 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key
	I1210 06:31:44.297844  830558 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key.7cb3dc2f
	I1210 06:31:44.297905  830558 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key
	I1210 06:31:44.297923  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 06:31:44.297952  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 06:31:44.297969  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 06:31:44.297986  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 06:31:44.297997  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 06:31:44.298022  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 06:31:44.298036  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 06:31:44.298051  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 06:31:44.298107  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 06:31:44.298147  830558 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 06:31:44.298160  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:31:44.298194  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 06:31:44.298223  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:31:44.298262  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 06:31:44.298323  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 06:31:44.298363  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem -> /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.298380  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.298399  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
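	All profile certs are reported valid and are only copied into place below, never regenerated. One way to confirm the chains offline is to verify each leaf against the CA it should roll up to; a sketch using the same destination paths the log copies to (note the proxy client cert chains to the front-proxy CA, not the cluster CA):

	    # Verify the apiserver serving cert against the cluster CA
	    openssl verify -CAfile /var/lib/minikube/certs/ca.crt \
	      /var/lib/minikube/certs/apiserver.crt

	    # The proxy client cert rolls up to proxy-client-ca instead
	    openssl verify -CAfile /var/lib/minikube/certs/proxy-client-ca.crt \
	      /var/lib/minikube/certs/proxy-client.crt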
	I1210 06:31:44.299062  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:31:44.319985  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:31:44.339121  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:31:44.360050  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:31:44.381013  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:31:44.398560  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:31:44.416157  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:31:44.433967  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:31:44.452197  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 06:31:44.470088  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 06:31:44.487844  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:31:44.505551  830558 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:31:44.518440  830558 ssh_runner.go:195] Run: openssl version
	I1210 06:31:44.524638  830558 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 06:31:44.525053  830558 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.532466  830558 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 06:31:44.539857  830558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.543663  830558 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.543696  830558 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.543746  830558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.585800  830558 command_runner.go:130] > 51391683
	I1210 06:31:44.586242  830558 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:31:44.594754  830558 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.602172  830558 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 06:31:44.609494  830558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.613294  830558 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.613412  830558 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.613500  830558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.654003  830558 command_runner.go:130] > 3ec20f2e
	I1210 06:31:44.654513  830558 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:31:44.661754  830558 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.668842  830558 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:31:44.676441  830558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.680175  830558 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.680286  830558 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.680373  830558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.725770  830558 command_runner.go:130] > b5213941
	I1210 06:31:44.726319  830558 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
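	The values printed above (51391683, 3ec20f2e, b5213941) are OpenSSL subject hashes, and the `.0` symlinks follow the c_rehash layout that lets OpenSSL locate a trusted CA by hashed filename. The pattern minikube applies for each cert, as a standalone sketch:

	    pem=/usr/share/ca-certificates/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in "$pem")    # e.g. b5213941
	    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"   # hashed lookup name
	    sudo test -L "/etc/ssl/certs/${hash}.0" && echo "trust link in place"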
	I1210 06:31:44.734095  830558 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:31:44.737911  830558 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:31:44.737986  830558 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1210 06:31:44.737999  830558 command_runner.go:130] > Device: 259,1	Inode: 1050653     Links: 1
	I1210 06:31:44.738007  830558 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 06:31:44.738013  830558 command_runner.go:130] > Access: 2025-12-10 06:27:36.644508596 +0000
	I1210 06:31:44.738018  830558 command_runner.go:130] > Modify: 2025-12-10 06:23:32.161941675 +0000
	I1210 06:31:44.738023  830558 command_runner.go:130] > Change: 2025-12-10 06:23:32.161941675 +0000
	I1210 06:31:44.738028  830558 command_runner.go:130] >  Birth: 2025-12-10 06:23:32.161941675 +0000
	I1210 06:31:44.738118  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:31:44.779233  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.779410  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:31:44.820004  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.820457  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:31:44.860741  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.861258  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:31:44.902039  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.902514  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:31:44.943742  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.944234  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 06:31:44.986027  830558 command_runner.go:130] > Certificate will not expire
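	Each `-checkend 86400` call above asks whether the certificate will still be valid 24 hours from now; exit status 0 (and the "Certificate will not expire" output) means it will. The same sweep over every control-plane cert in one loop, as a sketch:

	    for crt in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
	      openssl x509 -noout -in "$crt" -checkend 86400 \
	        || echo "WARN: $crt expires within 24h"
	    done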
	I1210 06:31:44.986500  830558 kubeadm.go:401] StartCluster: {Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:31:44.986586  830558 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 06:31:44.986679  830558 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:31:45.063121  830558 cri.go:89] found id: ""
	I1210 06:31:45.063216  830558 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:31:45.099783  830558 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 06:31:45.099866  830558 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 06:31:45.099891  830558 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 06:31:45.101399  830558 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:31:45.101477  830558 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:31:45.101575  830558 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:31:45.115892  830558 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:31:45.116487  830558 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-534748" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:45.116718  830558 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-784887/kubeconfig needs updating (will repair): [kubeconfig missing "functional-534748" cluster setting kubeconfig missing "functional-534748" context setting]
	I1210 06:31:45.117177  830558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
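	The repair above rewrites the kubeconfig so that the functional-534748 cluster and context entries exist again. Done by hand with kubectl, the equivalent would look roughly like this (paths are taken from the log; the user name for the context is an assumption here):

	    KC=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	    kubectl --kubeconfig "$KC" config set-cluster functional-534748 \
	      --server=https://192.168.49.2:8441 \
	      --certificate-authority=/home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt
	    # context ties the cluster to a user entry (user name assumed)
	    kubectl --kubeconfig "$KC" config set-context functional-534748 \
	      --cluster=functional-534748 --user=functional-534748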
	I1210 06:31:45.117949  830558 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:45.118213  830558 kapi.go:59] client config for functional-534748: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key", CAFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:31:45.118984  830558 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 06:31:45.119085  830558 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 06:31:45.119134  830558 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 06:31:45.119161  830558 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 06:31:45.119217  830558 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 06:31:45.119055  830558 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 06:31:45.119702  830558 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:31:45.137495  830558 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1210 06:31:45.137534  830558 kubeadm.go:602] duration metric: took 36.034287ms to restartPrimaryControlPlane
	I1210 06:31:45.137546  830558 kubeadm.go:403] duration metric: took 151.054854ms to StartCluster
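	The decision at kubeadm.go:635 hinges on the `diff -u` two lines up: diff exits 0 when the deployed kubeadm.yaml matches the freshly rendered kubeadm.yaml.new, so the control plane only needs a restart, not a reconfigure. A sketch of that exit-code branch:

	    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
	      echo "configs identical: restart only, no kubeadm re-init"
	    else
	      echo "configs drifted: re-run kubeadm phases with the new config"
	    fi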
	I1210 06:31:45.137576  830558 settings.go:142] acquiring lock: {Name:mkea9bfe0055ec090e4ce10a287599541b65b38e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:31:45.137653  830558 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:45.138311  830558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:31:45.138643  830558 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 06:31:45.139043  830558 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:31:45.139108  830558 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:31:45.139177  830558 addons.go:70] Setting storage-provisioner=true in profile "functional-534748"
	I1210 06:31:45.139193  830558 addons.go:239] Setting addon storage-provisioner=true in "functional-534748"
	I1210 06:31:45.139221  830558 host.go:66] Checking if "functional-534748" exists ...
	I1210 06:31:45.139239  830558 addons.go:70] Setting default-storageclass=true in profile "functional-534748"
	I1210 06:31:45.139259  830558 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-534748"
	I1210 06:31:45.139583  830558 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:31:45.139701  830558 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:31:45.145574  830558 out.go:179] * Verifying Kubernetes components...
	I1210 06:31:45.148690  830558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:31:45.190248  830558 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:45.190435  830558 kapi.go:59] client config for functional-534748: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key", CAFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:31:45.190756  830558 addons.go:239] Setting addon default-storageclass=true in "functional-534748"
	I1210 06:31:45.190791  830558 host.go:66] Checking if "functional-534748" exists ...
	I1210 06:31:45.192137  830558 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:31:45.207281  830558 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:31:45.210256  830558 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:45.210285  830558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:31:45.210364  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:45.229978  830558 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:45.230080  830558 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:31:45.230235  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:45.286606  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:45.319378  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:45.390267  830558 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:31:45.420552  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:45.445487  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:46.049742  830558 node_ready.go:35] waiting up to 6m0s for node "functional-534748" to be "Ready" ...
	I1210 06:31:46.049893  830558 type.go:168] "Request Body" body=""
	I1210 06:31:46.049953  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
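	The poll loop starting here issues a protobuf-preferring GET against /api/v1/nodes/functional-534748 roughly every 500ms, waiting up to the 6m0s budget for the Ready condition. From a shell, the same wait can be expressed with kubectl:

	    kubectl wait node/functional-534748 --for=condition=Ready --timeout=6m

	    # or a one-shot read of the condition the poller inspects
	    kubectl get node functional-534748 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'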
	I1210 06:31:46.050234  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.050272  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.050293  830558 retry.go:31] will retry after 223.621304ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.050345  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.050359  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.050366  830558 retry.go:31] will retry after 336.04204ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.050483  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:46.274791  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:46.331904  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.335903  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.335940  830558 retry.go:31] will retry after 342.637774ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.387178  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:46.449259  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.449297  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.449332  830558 retry.go:31] will retry after 384.971387ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
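	Every apply above fails the same way: kubectl cannot download the OpenAPI schema for client-side validation because nothing is listening on port 8441 yet, so retry.go backs off with growing delays (224ms, 336ms, 343ms, 385ms, ...). The shape of that retry, as a sketch with illustrative delays (the log invokes the versioned kubectl from /var/lib/minikube/binaries; plain kubectl stands in for it here):

	    for delay in 0.3 0.6 1.2 2.4; do
	      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	        kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml && break
	      sleep "$delay"   # apiserver not up yet: connection refused on :8441
	    done

	The error text itself suggests --validate=false as an escape hatch, but minikube retries instead of disabling validation.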
	I1210 06:31:46.550591  830558 type.go:168] "Request Body" body=""
	I1210 06:31:46.550669  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:46.551072  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:46.679392  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:46.735005  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.738824  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.738907  830558 retry.go:31] will retry after 477.156435ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.835016  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:46.898535  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.902447  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.902505  830558 retry.go:31] will retry after 587.076477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:47.050715  830558 type.go:168] "Request Body" body=""
	I1210 06:31:47.050787  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:47.051147  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:47.216664  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:47.275932  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:47.275982  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:47.276003  830558 retry.go:31] will retry after 1.079016213s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:47.490360  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:47.550012  830558 type.go:168] "Request Body" body=""
	I1210 06:31:47.550089  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:47.550390  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:47.551946  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:47.551982  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:47.552018  830558 retry.go:31] will retry after 1.089774327s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:48.050900  830558 type.go:168] "Request Body" body=""
	I1210 06:31:48.051018  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:48.051381  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:48.051446  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
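	At this point the addon applies and the readiness poll agree: nothing is accepting connections on 192.168.49.2:8441, which points at the apiserver static pod rather than at networking. A couple of hedged checks one could run inside the node to confirm:

	    # Is anything listening on the apiserver port?
	    sudo ss -tlnp | grep 8441

	    # Did the kube-apiserver container come up at all, and what does kubelet say?
	    sudo crictl ps -a --name kube-apiserver
	    sudo journalctl -u kubelet --no-pager | tail -n 50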
	I1210 06:31:48.355639  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:48.413382  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:48.416787  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:48.416855  830558 retry.go:31] will retry after 1.248652089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:48.550032  830558 type.go:168] "Request Body" body=""
	I1210 06:31:48.550107  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:48.550399  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:48.642762  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:48.712914  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:48.712955  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:48.712975  830558 retry.go:31] will retry after 929.620731ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:49.050281  830558 type.go:168] "Request Body" body=""
	I1210 06:31:49.050356  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:49.050675  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:49.550083  830558 type.go:168] "Request Body" body=""
	I1210 06:31:49.550159  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:49.550564  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:49.643743  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:49.666178  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:49.715961  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:49.724279  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:49.724309  830558 retry.go:31] will retry after 2.037720794s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:49.735770  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:49.735805  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:49.735824  830558 retry.go:31] will retry after 1.943919735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:50.050051  830558 type.go:168] "Request Body" body=""
	I1210 06:31:50.050130  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:50.050489  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:50.550100  830558 type.go:168] "Request Body" body=""
	I1210 06:31:50.550171  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:50.550494  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:50.550550  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:51.050020  830558 type.go:168] "Request Body" body=""
	I1210 06:31:51.050105  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:51.050456  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:51.550105  830558 type.go:168] "Request Body" body=""
	I1210 06:31:51.550181  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:51.550525  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:51.680862  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:51.745585  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:51.745620  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:51.745639  830558 retry.go:31] will retry after 2.112684099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:51.762814  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:51.821569  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:51.825567  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:51.825603  830558 retry.go:31] will retry after 2.699110245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:52.050957  830558 type.go:168] "Request Body" body=""
	I1210 06:31:52.051054  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:52.051439  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:52.550045  830558 type.go:168] "Request Body" body=""
	I1210 06:31:52.550123  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:52.550446  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:53.050176  830558 type.go:168] "Request Body" body=""
	I1210 06:31:53.050253  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:53.050635  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:53.050697  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
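The node_ready.go and round_trippers entries show the other half of the picture: minikube polls the node object roughly every 500ms and checks its Ready condition, tolerating connection-refused errors until the apiserver comes back. A sketch of the same check with client-go (the profile name and kubeconfig path are taken from the log; error handling is trimmed for brevity):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True, which is
// what the node_ready.go lines above are waiting for.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-534748", metav1.GetOptions{})
		switch {
		case err != nil:
			// While the apiserver is down this is the same
			// "connection refused" the log shows; keep retrying.
			fmt.Printf("will retry: %v\n", err)
		case nodeReady(node):
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the ~500ms cadence seen above
	}
}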
	I1210 06:31:53.550816  830558 type.go:168] "Request Body" body=""
	I1210 06:31:53.550907  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:53.551250  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:53.858630  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:53.918073  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:53.921869  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:53.921905  830558 retry.go:31] will retry after 2.635687612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:54.050031  830558 type.go:168] "Request Body" body=""
	I1210 06:31:54.050137  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:54.050511  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:54.525086  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:54.550579  830558 type.go:168] "Request Body" body=""
	I1210 06:31:54.550656  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:54.550932  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:54.585338  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:54.588955  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:54.588990  830558 retry.go:31] will retry after 2.164216453s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
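Note where each apply actually fails: in client-side validation, not at the server. kubectl first fetches the /openapi/v2 schema from the apiserver to validate the manifest, and with nothing listening on port 8441 even that download is refused. The suggested --validate=false would only skip the schema fetch; the apply itself still needs a live apiserver, so retrying (as minikube does here) is the right response. For completeness, a hypothetical re-run of the failing command with validation disabled, shelled out from Go the way the ssh_runner lines do:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Skips the /openapi/v2 download, but the POST to the apiserver
	// would still fail until something listens on the kubeconfig's port.
	cmd := exec.Command("kubectl", "apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storageclass.yaml")
	cmd.Env = append(cmd.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("apply failed: %v\n%s", err, out)
	}
}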
	I1210 06:31:55.050098  830558 type.go:168] "Request Body" body=""
	I1210 06:31:55.050166  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:55.050436  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:55.550589  830558 type.go:168] "Request Body" body=""
	I1210 06:31:55.550697  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:55.551055  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:55.551113  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:56.050733  830558 type.go:168] "Request Body" body=""
	I1210 06:31:56.050815  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:56.051188  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:56.549910  830558 type.go:168] "Request Body" body=""
	I1210 06:31:56.550000  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:56.550302  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:56.558696  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:56.634154  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:56.634201  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:56.634222  830558 retry.go:31] will retry after 5.842380515s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:56.753466  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:56.822332  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:56.822371  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:56.822391  830558 retry.go:31] will retry after 4.388036914s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:57.050861  830558 type.go:168] "Request Body" body=""
	I1210 06:31:57.050942  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:57.051261  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:57.550002  830558 type.go:168] "Request Body" body=""
	I1210 06:31:57.550079  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:57.550421  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:58.049946  830558 type.go:168] "Request Body" body=""
	I1210 06:31:58.050027  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:58.050302  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:58.050362  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:58.550039  830558 type.go:168] "Request Body" body=""
	I1210 06:31:58.550138  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:58.550513  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:59.050184  830558 type.go:168] "Request Body" body=""
	I1210 06:31:59.050262  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:59.050626  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:59.550005  830558 type.go:168] "Request Body" body=""
	I1210 06:31:59.550077  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:59.550409  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:00.050135  830558 type.go:168] "Request Body" body=""
	I1210 06:32:00.050225  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:00.050569  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:00.050621  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:00.550824  830558 type.go:168] "Request Body" body=""
	I1210 06:32:00.550903  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:00.551281  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:01.050843  830558 type.go:168] "Request Body" body=""
	I1210 06:32:01.050921  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:01.051196  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:01.210631  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:32:01.270135  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:01.273736  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:01.273765  830558 retry.go:31] will retry after 7.330909522s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:01.550049  830558 type.go:168] "Request Body" body=""
	I1210 06:32:01.550130  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:01.550449  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:02.050246  830558 type.go:168] "Request Body" body=""
	I1210 06:32:02.050347  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:02.050709  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:02.050768  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:02.477366  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:32:02.540275  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:02.540316  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:02.540336  830558 retry.go:31] will retry after 13.941322707s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:02.550443  830558 type.go:168] "Request Body" body=""
	I1210 06:32:02.550571  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:02.550850  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:03.050685  830558 type.go:168] "Request Body" body=""
	I1210 06:32:03.050764  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:03.051097  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:03.550804  830558 type.go:168] "Request Body" body=""
	I1210 06:32:03.550886  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:03.551211  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:04.050812  830558 type.go:168] "Request Body" body=""
	I1210 06:32:04.050903  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:04.051169  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:04.051225  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:04.549972  830558 type.go:168] "Request Body" body=""
	I1210 06:32:04.550053  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:04.550400  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:05.050150  830558 type.go:168] "Request Body" body=""
	I1210 06:32:05.050229  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:05.050552  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:05.550574  830558 type.go:168] "Request Body" body=""
	I1210 06:32:05.550641  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:05.550922  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:06.050749  830558 type.go:168] "Request Body" body=""
	I1210 06:32:06.050829  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:06.051208  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:06.051274  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
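Every request above carries Accept: application/vnd.kubernetes.protobuf,application/json, meaning the client asks for protobuf-encoded objects and falls back to JSON. With client-go that preference is two fields on the rest.Config; a small sketch (the package name is hypothetical, and cfg is assumed to come from clientcmd as in the polling example above):

package nodepoll

import "k8s.io/client-go/rest"

// protobufConfig returns a copy of cfg that negotiates protobuf on the
// wire, matching the Accept header in the round_trippers lines.
func protobufConfig(cfg *rest.Config) *rest.Config {
	c := rest.CopyConfig(cfg)
	c.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
	c.ContentType = "application/vnd.kubernetes.protobuf"
	return c
}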
	I1210 06:32:06.549940  830558 type.go:168] "Request Body" body=""
	I1210 06:32:06.550016  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:06.550350  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:07.050649  830558 type.go:168] "Request Body" body=""
	I1210 06:32:07.050725  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:07.050985  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:07.550782  830558 type.go:168] "Request Body" body=""
	I1210 06:32:07.550862  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:07.551221  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:08.050933  830558 type.go:168] "Request Body" body=""
	I1210 06:32:08.051023  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:08.051376  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:08.051435  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:08.550082  830558 type.go:168] "Request Body" body=""
	I1210 06:32:08.550158  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:08.550423  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:08.605823  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:32:08.661807  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:08.666022  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:08.666054  830558 retry.go:31] will retry after 18.459732711s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:09.050632  830558 type.go:168] "Request Body" body=""
	I1210 06:32:09.050712  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:09.051043  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:09.550857  830558 type.go:168] "Request Body" body=""
	I1210 06:32:09.550938  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:09.551276  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:10.050543  830558 type.go:168] "Request Body" body=""
	I1210 06:32:10.050622  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:10.050913  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:10.550123  830558 type.go:168] "Request Body" body=""
	I1210 06:32:10.550201  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:10.550566  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:10.550627  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:11.050158  830558 type.go:168] "Request Body" body=""
	I1210 06:32:11.050241  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:11.050595  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:11.549977  830558 type.go:168] "Request Body" body=""
	I1210 06:32:11.550061  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:11.550370  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:12.050064  830558 type.go:168] "Request Body" body=""
	I1210 06:32:12.050145  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:12.050512  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:12.550074  830558 type.go:168] "Request Body" body=""
	I1210 06:32:12.550151  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:12.550550  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:13.050834  830558 type.go:168] "Request Body" body=""
	I1210 06:32:13.050904  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:13.051215  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:13.051271  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:13.549985  830558 type.go:168] "Request Body" body=""
	I1210 06:32:13.550076  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:13.550390  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:14.050138  830558 type.go:168] "Request Body" body=""
	I1210 06:32:14.050216  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:14.050575  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:14.550278  830558 type.go:168] "Request Body" body=""
	I1210 06:32:14.550375  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:14.550721  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:15.050080  830558 type.go:168] "Request Body" body=""
	I1210 06:32:15.050169  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:15.050590  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:15.550609  830558 type.go:168] "Request Body" body=""
	I1210 06:32:15.550687  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:15.551021  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:15.551080  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:16.050637  830558 type.go:168] "Request Body" body=""
	I1210 06:32:16.050708  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:16.050991  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:16.482787  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:32:16.542663  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:16.546278  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:16.546307  830558 retry.go:31] will retry after 7.242230365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:16.550430  830558 type.go:168] "Request Body" body=""
	I1210 06:32:16.550511  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:16.550807  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:17.050649  830558 type.go:168] "Request Body" body=""
	I1210 06:32:17.050741  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:17.051138  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:17.550461  830558 type.go:168] "Request Body" body=""
	I1210 06:32:17.550553  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:17.550825  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:18.050619  830558 type.go:168] "Request Body" body=""
	I1210 06:32:18.050699  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:18.051034  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:18.051091  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:18.550728  830558 type.go:168] "Request Body" body=""
	I1210 06:32:18.550817  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:18.551143  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:19.050890  830558 type.go:168] "Request Body" body=""
	I1210 06:32:19.050958  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:19.051259  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:19.550945  830558 type.go:168] "Request Body" body=""
	I1210 06:32:19.551021  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:19.551375  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:20.049992  830558 type.go:168] "Request Body" body=""
	I1210 06:32:20.050068  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:20.050449  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:20.549971  830558 type.go:168] "Request Body" body=""
	I1210 06:32:20.550047  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:20.550340  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:20.550389  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:21.049976  830558 type.go:168] "Request Body" body=""
	I1210 06:32:21.050049  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:21.050379  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:21.550111  830558 type.go:168] "Request Body" body=""
	I1210 06:32:21.550187  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:21.550564  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:22.050899  830558 type.go:168] "Request Body" body=""
	I1210 06:32:22.050974  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:22.051306  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:22.550116  830558 type.go:168] "Request Body" body=""
	I1210 06:32:22.550195  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:22.550553  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:22.550614  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:23.050042  830558 type.go:168] "Request Body" body=""
	I1210 06:32:23.050118  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:23.050459  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:23.549939  830558 type.go:168] "Request Body" body=""
	I1210 06:32:23.550009  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:23.550297  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:23.788809  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:32:23.847955  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:23.851833  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:23.851867  830558 retry.go:31] will retry after 12.516286884s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:24.050060  830558 type.go:168] "Request Body" body=""
	I1210 06:32:24.050142  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:24.050525  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:24.550248  830558 type.go:168] "Request Body" body=""
	I1210 06:32:24.550322  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:24.550678  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:24.550736  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:25.050033  830558 type.go:168] "Request Body" body=""
	I1210 06:32:25.050114  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:25.050546  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:25.550682  830558 type.go:168] "Request Body" body=""
	I1210 06:32:25.550758  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:25.551068  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:26.050934  830558 type.go:168] "Request Body" body=""
	I1210 06:32:26.051011  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:26.051351  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:26.549946  830558 type.go:168] "Request Body" body=""
	I1210 06:32:26.550023  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:26.550287  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:27.050019  830558 type.go:168] "Request Body" body=""
	I1210 06:32:27.050090  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:27.050429  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:27.050507  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:27.126908  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:32:27.191358  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:27.191398  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:27.191417  830558 retry.go:31] will retry after 11.065094951s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:27.549993  830558 type.go:168] "Request Body" body=""
	I1210 06:32:27.550078  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:27.550424  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:28.050147  830558 type.go:168] "Request Body" body=""
	I1210 06:32:28.050242  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:28.050581  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:28.550132  830558 type.go:168] "Request Body" body=""
	I1210 06:32:28.550207  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:28.550541  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:29.050024  830558 type.go:168] "Request Body" body=""
	I1210 06:32:29.050122  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:29.050535  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:29.050590  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
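A note on reading this failure mode: "connect: connection refused" means the TCP handshake reached 192.168.49.2 and the port actively rejected it, i.e. nothing is listening on 8441 (the apiserver process inside the node is down), rather than a routing or firewall problem, which would typically surface as a timeout. A quick stdlib probe that distinguishes the two, using the address from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Refused = port closed (apiserver not running); timeout = packets
	// not reaching the node at all.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("port is open; apiserver is at least listening")
}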
	I1210 06:32:29.550851  830558 type.go:168] "Request Body" body=""
	I1210 06:32:29.550933  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:29.551212  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:30.050015  830558 type.go:168] "Request Body" body=""
	I1210 06:32:30.050111  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:30.050559  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:30.550493  830558 type.go:168] "Request Body" body=""
	I1210 06:32:30.550571  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:30.550933  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:31.050570  830558 type.go:168] "Request Body" body=""
	I1210 06:32:31.050667  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:31.050939  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:31.050993  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:31.550742  830558 type.go:168] "Request Body" body=""
	I1210 06:32:31.550827  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:31.551169  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:32.050826  830558 type.go:168] "Request Body" body=""
	I1210 06:32:32.050910  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:32.051237  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:32.549938  830558 type.go:168] "Request Body" body=""
	I1210 06:32:32.550010  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:32.550264  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:33.050015  830558 type.go:168] "Request Body" body=""
	I1210 06:32:33.050091  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:33.050451  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:33.550173  830558 type.go:168] "Request Body" body=""
	I1210 06:32:33.550258  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:33.550581  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:33.550638  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:34.049992  830558 type.go:168] "Request Body" body=""
	I1210 06:32:34.050060  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:34.050330  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:34.550070  830558 type.go:168] "Request Body" body=""
	I1210 06:32:34.550159  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:34.550540  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:35.050253  830558 type.go:168] "Request Body" body=""
	I1210 06:32:35.050340  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:35.050688  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:35.550817  830558 type.go:168] "Request Body" body=""
	I1210 06:32:35.550922  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:35.551259  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:35.551320  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:36.049997  830558 type.go:168] "Request Body" body=""
	I1210 06:32:36.050082  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:36.050415  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:36.369119  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:32:36.431728  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:36.431764  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:36.431783  830558 retry.go:31] will retry after 39.090862924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
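	Each failed apply above is handed to minikube's retry helper (retry.go:31), which waits a randomized backoff (39.09 s here) before re-running the identical kubectl command. The following is a sketch of that retry-with-jittered-backoff pattern in Go; retryApply, the attempt count, and the backoff bounds are assumptions for illustration, not the real implementation.

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryApply re-runs fn until it succeeds or attempts run out, sleeping a
	// jittered delay between tries so concurrent retries do not synchronize.
	func retryApply(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		// Stand-in for `kubectl apply --force -f .../storage-provisioner.yaml`;
		// it always fails, as in the log, so all attempts are consumed.
		apply := func() error {
			return fmt.Errorf("dial tcp [::1]:8441: connect: connection refused")
		}
		_ = retryApply(3, 500*time.Millisecond, apply)
	}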
	I1210 06:32:36.549963  830558 type.go:168] "Request Body" body=""
	I1210 06:32:36.550043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:36.550375  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:37.050652  830558 type.go:168] "Request Body" body=""
	I1210 06:32:37.050724  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:37.050986  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:37.550839  830558 type.go:168] "Request Body" body=""
	I1210 06:32:37.550916  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:37.551209  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:38.049961  830558 type.go:168] "Request Body" body=""
	I1210 06:32:38.050040  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:38.050387  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:38.050446  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:38.256706  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:32:38.315606  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:38.315652  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:38.315671  830558 retry.go:31] will retry after 24.874249468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:38.549965  830558 type.go:168] "Request Body" body=""
	I1210 06:32:38.550037  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:38.550353  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:39.050035  830558 type.go:168] "Request Body" body=""
	I1210 06:32:39.050112  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:39.050437  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:39.550165  830558 type.go:168] "Request Body" body=""
	I1210 06:32:39.550240  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:39.550611  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:40.050932  830558 type.go:168] "Request Body" body=""
	I1210 06:32:40.051023  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:40.051412  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:40.051484  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:40.550007  830558 type.go:168] "Request Body" body=""
	I1210 06:32:40.550092  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:40.550424  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:41.050151  830558 type.go:168] "Request Body" body=""
	I1210 06:32:41.050226  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:41.050542  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:41.549934  830558 type.go:168] "Request Body" body=""
	I1210 06:32:41.550007  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:41.550347  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:42.050083  830558 type.go:168] "Request Body" body=""
	I1210 06:32:42.050160  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:42.050554  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:42.550115  830558 type.go:168] "Request Body" body=""
	I1210 06:32:42.550200  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:42.550557  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:42.550613  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:43.050266  830558 type.go:168] "Request Body" body=""
	I1210 06:32:43.050343  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:43.050649  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:43.549979  830558 type.go:168] "Request Body" body=""
	I1210 06:32:43.550060  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:43.550388  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:44.049985  830558 type.go:168] "Request Body" body=""
	I1210 06:32:44.050061  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:44.050403  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:44.549913  830558 type.go:168] "Request Body" body=""
	I1210 06:32:44.549995  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:44.550254  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:45.050148  830558 type.go:168] "Request Body" body=""
	I1210 06:32:45.050255  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:45.050774  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:45.050854  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:45.549930  830558 type.go:168] "Request Body" body=""
	I1210 06:32:45.550027  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:45.550349  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:46.050055  830558 type.go:168] "Request Body" body=""
	I1210 06:32:46.050137  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:46.050451  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:46.550031  830558 type.go:168] "Request Body" body=""
	I1210 06:32:46.550106  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:46.550449  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:47.050187  830558 type.go:168] "Request Body" body=""
	I1210 06:32:47.050264  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:47.050652  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:47.550359  830558 type.go:168] "Request Body" body=""
	I1210 06:32:47.550435  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:47.550733  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:47.550791  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:48.050535  830558 type.go:168] "Request Body" body=""
	I1210 06:32:48.050612  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:48.050950  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:48.550625  830558 type.go:168] "Request Body" body=""
	I1210 06:32:48.550703  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:48.551027  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:49.050305  830558 type.go:168] "Request Body" body=""
	I1210 06:32:49.050380  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:49.050665  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:49.550026  830558 type.go:168] "Request Body" body=""
	I1210 06:32:49.550106  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:49.550494  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:50.050211  830558 type.go:168] "Request Body" body=""
	I1210 06:32:50.050293  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:50.050654  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:50.050715  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:50.550658  830558 type.go:168] "Request Body" body=""
	I1210 06:32:50.550732  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:50.550987  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:51.050776  830558 type.go:168] "Request Body" body=""
	I1210 06:32:51.050858  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:51.051172  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:51.549919  830558 type.go:168] "Request Body" body=""
	I1210 06:32:51.549999  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:51.550341  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:52.049978  830558 type.go:168] "Request Body" body=""
	I1210 06:32:52.050053  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:52.050371  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:52.550001  830558 type.go:168] "Request Body" body=""
	I1210 06:32:52.550075  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:52.550411  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:52.550496  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:53.050025  830558 type.go:168] "Request Body" body=""
	I1210 06:32:53.050100  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:53.050443  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:53.550167  830558 type.go:168] "Request Body" body=""
	I1210 06:32:53.550287  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:53.550564  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:54.050027  830558 type.go:168] "Request Body" body=""
	I1210 06:32:54.050122  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:54.050442  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:54.550226  830558 type.go:168] "Request Body" body=""
	I1210 06:32:54.550303  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:54.550659  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:54.550719  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:55.049976  830558 type.go:168] "Request Body" body=""
	I1210 06:32:55.050049  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:55.050343  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:55.550553  830558 type.go:168] "Request Body" body=""
	I1210 06:32:55.550627  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:55.550930  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:56.050724  830558 type.go:168] "Request Body" body=""
	I1210 06:32:56.050807  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:56.051136  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:56.550392  830558 type.go:168] "Request Body" body=""
	I1210 06:32:56.550490  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:56.550765  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:56.550815  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:57.050617  830558 type.go:168] "Request Body" body=""
	I1210 06:32:57.050698  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:57.051032  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:57.550880  830558 type.go:168] "Request Body" body=""
	I1210 06:32:57.550957  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:57.551319  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:58.050503  830558 type.go:168] "Request Body" body=""
	I1210 06:32:58.050584  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:58.050859  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:58.550636  830558 type.go:168] "Request Body" body=""
	I1210 06:32:58.550712  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:58.551061  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:58.551119  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:59.050715  830558 type.go:168] "Request Body" body=""
	I1210 06:32:59.050796  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:59.051120  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:59.550919  830558 type.go:168] "Request Body" body=""
	I1210 06:32:59.550998  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:59.551267  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:00.050026  830558 type.go:168] "Request Body" body=""
	I1210 06:33:00.050119  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:00.052318  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:33:00.550554  830558 type.go:168] "Request Body" body=""
	I1210 06:33:00.550633  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:00.550978  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:01.050281  830558 type.go:168] "Request Body" body=""
	I1210 06:33:01.050351  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:01.050633  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:01.050680  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:01.550031  830558 type.go:168] "Request Body" body=""
	I1210 06:33:01.550108  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:01.550446  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:02.050197  830558 type.go:168] "Request Body" body=""
	I1210 06:33:02.050277  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:02.050651  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:02.550347  830558 type.go:168] "Request Body" body=""
	I1210 06:33:02.550420  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:02.550704  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:03.050003  830558 type.go:168] "Request Body" body=""
	I1210 06:33:03.050076  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:03.050408  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:03.190859  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:33:03.248648  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:33:03.248694  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:33:03.248794  830558 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 06:33:03.550044  830558 type.go:168] "Request Body" body=""
	I1210 06:33:03.550122  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:03.550404  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:03.550454  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:04.050739  830558 type.go:168] "Request Body" body=""
	I1210 06:33:04.050814  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:04.051133  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:04.550977  830558 type.go:168] "Request Body" body=""
	I1210 06:33:04.551052  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:04.551392  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:05.050105  830558 type.go:168] "Request Body" body=""
	I1210 06:33:05.050184  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:05.050531  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:05.550435  830558 type.go:168] "Request Body" body=""
	I1210 06:33:05.550528  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:05.550787  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:05.550829  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:06.050557  830558 type.go:168] "Request Body" body=""
	I1210 06:33:06.050630  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:06.050961  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:06.550801  830558 type.go:168] "Request Body" body=""
	I1210 06:33:06.550879  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:06.551223  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:07.049908  830558 type.go:168] "Request Body" body=""
	I1210 06:33:07.049978  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:07.050285  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:07.550020  830558 type.go:168] "Request Body" body=""
	I1210 06:33:07.550098  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:07.550444  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:08.050180  830558 type.go:168] "Request Body" body=""
	I1210 06:33:08.050261  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:08.050656  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:08.050717  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:08.549966  830558 type.go:168] "Request Body" body=""
	I1210 06:33:08.550043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:08.550358  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:09.050033  830558 type.go:168] "Request Body" body=""
	I1210 06:33:09.050114  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:09.050432  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:09.550043  830558 type.go:168] "Request Body" body=""
	I1210 06:33:09.550121  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:09.550501  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:10.050055  830558 type.go:168] "Request Body" body=""
	I1210 06:33:10.050127  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:10.050401  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:10.550597  830558 type.go:168] "Request Body" body=""
	I1210 06:33:10.550682  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:10.551012  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:10.551066  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:11.050806  830558 type.go:168] "Request Body" body=""
	I1210 06:33:11.050883  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:11.051219  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:11.550460  830558 type.go:168] "Request Body" body=""
	I1210 06:33:11.550568  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:11.550827  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:12.050637  830558 type.go:168] "Request Body" body=""
	I1210 06:33:12.050716  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:12.051056  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:12.550879  830558 type.go:168] "Request Body" body=""
	I1210 06:33:12.550959  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:12.551385  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:12.551442  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:13.049924  830558 type.go:168] "Request Body" body=""
	I1210 06:33:13.049995  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:13.050301  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:13.549989  830558 type.go:168] "Request Body" body=""
	I1210 06:33:13.550071  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:13.550389  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:14.050007  830558 type.go:168] "Request Body" body=""
	I1210 06:33:14.050083  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:14.050417  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:14.550127  830558 type.go:168] "Request Body" body=""
	I1210 06:33:14.550200  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:14.550484  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:15.050155  830558 type.go:168] "Request Body" body=""
	I1210 06:33:15.050238  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:15.050632  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:15.050702  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:15.522803  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:33:15.550270  830558 type.go:168] "Request Body" body=""
	I1210 06:33:15.550344  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:15.550631  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:15.583628  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:33:15.587769  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:33:15.587875  830558 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 06:33:15.590972  830558 out.go:179] * Enabled addons: 
	I1210 06:33:15.594685  830558 addons.go:530] duration metric: took 1m30.455573868s for enable addons: enabled=[]
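	With both manifests still failing after their final retries, minikube gives up: the addons phase ends after 1m30.46s with an empty enabled list, even though the summary line reads "Enabled addons:". The root cause throughout is that nothing is listening on the apiserver port. A quick TCP probe of the endpoint from the log, written as a hypothetical standalone diagnostic (not part of minikube), would confirm this:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 192.168.49.2:8441 is the apiserver endpoint polled throughout the log.
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			// Prints the same "connect: connection refused" seen in every poll.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port open")
	}

	If such a probe is refused while kube-apiserver is expected to be up, the apiserver container or the container runtime on the node is the next place to look.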
	I1210 06:33:16.049998  830558 type.go:168] "Request Body" body=""
	I1210 06:33:16.050079  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:16.050410  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:16.550044  830558 type.go:168] "Request Body" body=""
	I1210 06:33:16.550122  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:16.550449  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:17.050003  830558 type.go:168] "Request Body" body=""
	I1210 06:33:17.050101  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:17.050382  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:17.549964  830558 type.go:168] "Request Body" body=""
	I1210 06:33:17.550065  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:17.550363  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:17.550413  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	[...log condensed: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-534748 poll repeats every ~500 ms from 06:33:18.050 through 06:34:00.050; every request is refused ("dial tcp 192.168.49.2:8441: connect: connection refused", responses returning in 0-5 ms), and node_ready.go:55 emits the same will-retry warning about every 2-2.5 s...]
	I1210 06:34:00.549926  830558 type.go:168] "Request Body" body=""
	I1210 06:34:00.550006  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:00.550355  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:01.050662  830558 type.go:168] "Request Body" body=""
	I1210 06:34:01.050737  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:01.051064  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:01.550884  830558 type.go:168] "Request Body" body=""
	I1210 06:34:01.550964  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:01.551306  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:02.050041  830558 type.go:168] "Request Body" body=""
	I1210 06:34:02.050127  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:02.050503  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:02.550195  830558 type.go:168] "Request Body" body=""
	I1210 06:34:02.550268  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:02.550561  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:02.550618  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
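The repeated "connect: connection refused" means nothing is listening on 192.168.49.2:8441 (the --apiserver-port used for this profile), as opposed to a TLS, auth, or routing failure. A standalone probe using only the Go standard library, with the address taken from the log above, shows the distinction:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Address taken from this run's log; adjust for other profiles.
        conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
        if err != nil {
            // "connect: connection refused" here confirms the port is closed,
            // i.e. kube-apiserver is not (yet) listening on it.
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }

A refused connection fails immediately, which is why the responses above report milliseconds=0; a silently dropped packet (e.g. a firewall) would instead hang until the dial timeout.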
	[...log condensed: polling continues unchanged every ~500 ms from 06:34:03.050 through 06:34:18.550; all requests to https://192.168.49.2:8441/api/v1/nodes/functional-534748 are refused and node_ready.go:55 keeps logging the same will-retry warning...]
	I1210 06:34:19.049907  830558 type.go:168] "Request Body" body=""
	I1210 06:34:19.049978  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:19.050300  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:19.549995  830558 type.go:168] "Request Body" body=""
	I1210 06:34:19.550076  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:19.550408  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:20.050129  830558 type.go:168] "Request Body" body=""
	I1210 06:34:20.050269  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:20.050682  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:20.550512  830558 type.go:168] "Request Body" body=""
	I1210 06:34:20.550605  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:20.550929  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:20.550983  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:21.050722  830558 type.go:168] "Request Body" body=""
	I1210 06:34:21.050804  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:21.051141  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:21.550824  830558 type.go:168] "Request Body" body=""
	I1210 06:34:21.550907  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:21.551258  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:22.050508  830558 type.go:168] "Request Body" body=""
	I1210 06:34:22.050581  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:22.050851  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:22.550614  830558 type.go:168] "Request Body" body=""
	I1210 06:34:22.550689  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:22.551037  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:22.551097  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:23.050847  830558 type.go:168] "Request Body" body=""
	I1210 06:34:23.050935  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:23.051235  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:23.549922  830558 type.go:168] "Request Body" body=""
	I1210 06:34:23.550000  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:23.550273  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:24.049978  830558 type.go:168] "Request Body" body=""
	I1210 06:34:24.050066  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:24.050419  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:24.550155  830558 type.go:168] "Request Body" body=""
	I1210 06:34:24.550230  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:24.550613  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:25.050894  830558 type.go:168] "Request Body" body=""
	I1210 06:34:25.050965  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:25.051235  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:25.051280  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:25.550372  830558 type.go:168] "Request Body" body=""
	I1210 06:34:25.550449  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:25.550796  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:26.050683  830558 type.go:168] "Request Body" body=""
	I1210 06:34:26.050763  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:26.051110  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:26.550564  830558 type.go:168] "Request Body" body=""
	I1210 06:34:26.550636  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:26.550899  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:27.050671  830558 type.go:168] "Request Body" body=""
	I1210 06:34:27.050748  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:27.051102  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:27.550781  830558 type.go:168] "Request Body" body=""
	I1210 06:34:27.550860  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:27.551195  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:27.551252  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:28.049904  830558 type.go:168] "Request Body" body=""
	I1210 06:34:28.049986  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:28.050254  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:28.550025  830558 type.go:168] "Request Body" body=""
	I1210 06:34:28.550123  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:28.550518  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:29.050220  830558 type.go:168] "Request Body" body=""
	I1210 06:34:29.050298  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:29.050678  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:29.549921  830558 type.go:168] "Request Body" body=""
	I1210 06:34:29.549996  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:29.550254  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:30.050073  830558 type.go:168] "Request Body" body=""
	I1210 06:34:30.050159  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:30.050509  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:30.050563  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:30.550516  830558 type.go:168] "Request Body" body=""
	I1210 06:34:30.550620  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:30.550952  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:31.050272  830558 type.go:168] "Request Body" body=""
	I1210 06:34:31.050339  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:31.050673  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:31.550029  830558 type.go:168] "Request Body" body=""
	I1210 06:34:31.550101  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:31.550423  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:32.050170  830558 type.go:168] "Request Body" body=""
	I1210 06:34:32.050245  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:32.050587  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:32.050647  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:32.550304  830558 type.go:168] "Request Body" body=""
	I1210 06:34:32.550386  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:32.550677  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:33.049995  830558 type.go:168] "Request Body" body=""
	I1210 06:34:33.050069  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:33.050375  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:33.550029  830558 type.go:168] "Request Body" body=""
	I1210 06:34:33.550108  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:33.550519  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:34.050638  830558 type.go:168] "Request Body" body=""
	I1210 06:34:34.050710  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:34.051024  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:34.051085  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:34.550840  830558 type.go:168] "Request Body" body=""
	I1210 06:34:34.550922  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:34.551274  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:35.050007  830558 type.go:168] "Request Body" body=""
	I1210 06:34:35.050092  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:35.050451  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:35.550503  830558 type.go:168] "Request Body" body=""
	I1210 06:34:35.550574  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:35.550888  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:36.050719  830558 type.go:168] "Request Body" body=""
	I1210 06:34:36.050822  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:36.051263  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:36.051321  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:36.550954  830558 type.go:168] "Request Body" body=""
	I1210 06:34:36.551056  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:36.551466  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:37.050811  830558 type.go:168] "Request Body" body=""
	I1210 06:34:37.050890  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:37.051215  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:37.549947  830558 type.go:168] "Request Body" body=""
	I1210 06:34:37.550028  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:37.550350  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:38.050027  830558 type.go:168] "Request Body" body=""
	I1210 06:34:38.050107  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:38.050495  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:38.550034  830558 type.go:168] "Request Body" body=""
	I1210 06:34:38.550118  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:38.550387  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:38.550431  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:39.050030  830558 type.go:168] "Request Body" body=""
	I1210 06:34:39.050113  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:39.050460  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:39.550021  830558 type.go:168] "Request Body" body=""
	I1210 06:34:39.550094  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:39.550455  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:40.050212  830558 type.go:168] "Request Body" body=""
	I1210 06:34:40.050299  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:40.050616  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:40.550723  830558 type.go:168] "Request Body" body=""
	I1210 06:34:40.550800  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:40.551131  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:40.551184  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:41.050959  830558 type.go:168] "Request Body" body=""
	I1210 06:34:41.051050  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:41.051405  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:41.550069  830558 type.go:168] "Request Body" body=""
	I1210 06:34:41.550140  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:41.550408  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:42.050053  830558 type.go:168] "Request Body" body=""
	I1210 06:34:42.050128  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:42.050423  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:42.549998  830558 type.go:168] "Request Body" body=""
	I1210 06:34:42.550074  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:42.550426  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:43.049964  830558 type.go:168] "Request Body" body=""
	I1210 06:34:43.050040  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:43.050364  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:43.050427  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:43.550060  830558 type.go:168] "Request Body" body=""
	I1210 06:34:43.550138  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:43.550432  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:44.050174  830558 type.go:168] "Request Body" body=""
	I1210 06:34:44.050254  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:44.050577  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:44.550265  830558 type.go:168] "Request Body" body=""
	I1210 06:34:44.550337  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:44.550631  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:45.050106  830558 type.go:168] "Request Body" body=""
	I1210 06:34:45.050225  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:45.051475  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1210 06:34:45.051555  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:45.550586  830558 type.go:168] "Request Body" body=""
	I1210 06:34:45.550670  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:45.551004  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:46.050308  830558 type.go:168] "Request Body" body=""
	I1210 06:34:46.050387  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:46.050713  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:46.550592  830558 type.go:168] "Request Body" body=""
	I1210 06:34:46.550668  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:46.551031  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:47.050814  830558 type.go:168] "Request Body" body=""
	I1210 06:34:47.050890  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:47.051189  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:47.550459  830558 type.go:168] "Request Body" body=""
	I1210 06:34:47.550545  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:47.550844  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:47.550902  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:48.050660  830558 type.go:168] "Request Body" body=""
	I1210 06:34:48.050735  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:48.051052  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:48.550831  830558 type.go:168] "Request Body" body=""
	I1210 06:34:48.550907  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:48.551256  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:49.050342  830558 type.go:168] "Request Body" body=""
	I1210 06:34:49.050418  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:49.050723  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:49.550042  830558 type.go:168] "Request Body" body=""
	I1210 06:34:49.550119  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:49.550450  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:50.050211  830558 type.go:168] "Request Body" body=""
	I1210 06:34:50.050296  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:50.050688  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:50.050747  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:50.550446  830558 type.go:168] "Request Body" body=""
	I1210 06:34:50.550545  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:50.550803  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:51.050575  830558 type.go:168] "Request Body" body=""
	I1210 06:34:51.050658  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:51.050992  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:51.550764  830558 type.go:168] "Request Body" body=""
	I1210 06:34:51.550839  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:51.551183  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:52.050947  830558 type.go:168] "Request Body" body=""
	I1210 06:34:52.051021  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:52.051295  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:52.051339  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:52.550020  830558 type.go:168] "Request Body" body=""
	I1210 06:34:52.550102  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:52.550487  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:53.050213  830558 type.go:168] "Request Body" body=""
	I1210 06:34:53.050304  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:53.050648  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:53.549965  830558 type.go:168] "Request Body" body=""
	I1210 06:34:53.550042  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:53.550369  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:54.050024  830558 type.go:168] "Request Body" body=""
	I1210 06:34:54.050101  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:54.050479  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:54.550177  830558 type.go:168] "Request Body" body=""
	I1210 06:34:54.550254  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:54.550626  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:54.550686  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:55.049960  830558 type.go:168] "Request Body" body=""
	I1210 06:34:55.050038  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:55.050307  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:55.550536  830558 type.go:168] "Request Body" body=""
	I1210 06:34:55.550618  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:55.550953  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:56.050765  830558 type.go:168] "Request Body" body=""
	I1210 06:34:56.050845  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:56.051194  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:56.549892  830558 type.go:168] "Request Body" body=""
	I1210 06:34:56.549977  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:56.550245  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:57.049958  830558 type.go:168] "Request Body" body=""
	I1210 06:34:57.050040  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:57.050378  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:57.050439  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:57.549988  830558 type.go:168] "Request Body" body=""
	I1210 06:34:57.550071  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:57.550412  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:58.049966  830558 type.go:168] "Request Body" body=""
	I1210 06:34:58.050043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:58.050321  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:58.550038  830558 type.go:168] "Request Body" body=""
	I1210 06:34:58.550112  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:58.550398  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:59.050037  830558 type.go:168] "Request Body" body=""
	I1210 06:34:59.050117  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:59.050434  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:59.050536  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:59.550090  830558 type.go:168] "Request Body" body=""
	I1210 06:34:59.550165  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:59.550488  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:00.050082  830558 type.go:168] "Request Body" body=""
	I1210 06:35:00.050172  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:00.050532  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:00.550871  830558 type.go:168] "Request Body" body=""
	I1210 06:35:00.551043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:00.551414  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:01.049974  830558 type.go:168] "Request Body" body=""
	I1210 06:35:01.050056  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:01.050409  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:01.550036  830558 type.go:168] "Request Body" body=""
	I1210 06:35:01.550123  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:01.550506  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:01.550566  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:02.050252  830558 type.go:168] "Request Body" body=""
	I1210 06:35:02.050334  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:02.050718  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:02.549994  830558 type.go:168] "Request Body" body=""
	I1210 06:35:02.550066  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:02.550338  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:03.050044  830558 type.go:168] "Request Body" body=""
	I1210 06:35:03.050121  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:03.050446  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:03.550201  830558 type.go:168] "Request Body" body=""
	I1210 06:35:03.550278  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:03.550618  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:03.550677  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:04.049974  830558 type.go:168] "Request Body" body=""
	I1210 06:35:04.050053  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:04.050326  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:04.550002  830558 type.go:168] "Request Body" body=""
	I1210 06:35:04.550073  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:04.550366  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:05.050030  830558 type.go:168] "Request Body" body=""
	I1210 06:35:05.050111  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:05.050435  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:05.550392  830558 type.go:168] "Request Body" body=""
	I1210 06:35:05.550487  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:05.550754  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:05.550797  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:06.050578  830558 type.go:168] "Request Body" body=""
	I1210 06:35:06.050658  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:06.051028  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:06.550698  830558 type.go:168] "Request Body" body=""
	I1210 06:35:06.550789  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:06.551170  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:07.050527  830558 type.go:168] "Request Body" body=""
	I1210 06:35:07.050605  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:07.050889  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:07.550670  830558 type.go:168] "Request Body" body=""
	I1210 06:35:07.550754  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:07.551130  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:07.551186  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:08.049928  830558 type.go:168] "Request Body" body=""
	I1210 06:35:08.050023  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:08.050388  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET poll repeats every ~500ms from 06:35:08 through 06:36:09; every response is empty with milliseconds=0 except a single milliseconds=4 response at 06:35:46; node_ready.go:55 re-logs the "will retry" warning below roughly every 2s ...]
	W1210 06:36:07.551183  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:10.050587  830558 type.go:168] "Request Body" body=""
	I1210 06:36:10.050670  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:10.050953  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:10.051003  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:10.550899  830558 type.go:168] "Request Body" body=""
	I1210 06:36:10.550976  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:10.551312  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:11.050955  830558 type.go:168] "Request Body" body=""
	I1210 06:36:11.051047  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:11.051365  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:11.549986  830558 type.go:168] "Request Body" body=""
	I1210 06:36:11.550062  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:11.550380  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:12.050023  830558 type.go:168] "Request Body" body=""
	I1210 06:36:12.050101  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:12.050424  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:12.550175  830558 type.go:168] "Request Body" body=""
	I1210 06:36:12.550251  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:12.550626  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:12.550686  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:13.049890  830558 type.go:168] "Request Body" body=""
	I1210 06:36:13.049962  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:13.050215  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:13.549891  830558 type.go:168] "Request Body" body=""
	I1210 06:36:13.549970  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:13.550296  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:14.049986  830558 type.go:168] "Request Body" body=""
	I1210 06:36:14.050082  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:14.050411  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:14.550126  830558 type.go:168] "Request Body" body=""
	I1210 06:36:14.550211  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:14.550507  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:15.050062  830558 type.go:168] "Request Body" body=""
	I1210 06:36:15.050145  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:15.050506  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:15.050566  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:15.550556  830558 type.go:168] "Request Body" body=""
	I1210 06:36:15.550635  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:15.550967  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:16.050731  830558 type.go:168] "Request Body" body=""
	I1210 06:36:16.050861  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:16.051148  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:16.550930  830558 type.go:168] "Request Body" body=""
	I1210 06:36:16.551008  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:16.551326  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:17.050031  830558 type.go:168] "Request Body" body=""
	I1210 06:36:17.050113  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:17.050447  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:17.550156  830558 type.go:168] "Request Body" body=""
	I1210 06:36:17.550229  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:17.550520  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:17.550565  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:18.050027  830558 type.go:168] "Request Body" body=""
	I1210 06:36:18.050108  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:18.050447  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:18.550193  830558 type.go:168] "Request Body" body=""
	I1210 06:36:18.550278  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:18.550612  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:19.049970  830558 type.go:168] "Request Body" body=""
	I1210 06:36:19.050043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:19.050368  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:19.550014  830558 type.go:168] "Request Body" body=""
	I1210 06:36:19.550089  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:19.550419  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:20.050206  830558 type.go:168] "Request Body" body=""
	I1210 06:36:20.050292  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:20.050696  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:20.050759  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:20.550656  830558 type.go:168] "Request Body" body=""
	I1210 06:36:20.550733  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:20.551013  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:21.050835  830558 type.go:168] "Request Body" body=""
	I1210 06:36:21.050921  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:21.051263  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:21.549995  830558 type.go:168] "Request Body" body=""
	I1210 06:36:21.550076  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:21.550423  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:22.050133  830558 type.go:168] "Request Body" body=""
	I1210 06:36:22.050216  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:22.050512  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:22.550029  830558 type.go:168] "Request Body" body=""
	I1210 06:36:22.550116  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:22.550449  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:22.550527  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:23.050001  830558 type.go:168] "Request Body" body=""
	I1210 06:36:23.050090  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:23.050430  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:23.549960  830558 type.go:168] "Request Body" body=""
	I1210 06:36:23.550028  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:23.550287  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:24.050045  830558 type.go:168] "Request Body" body=""
	I1210 06:36:24.050121  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:24.050454  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:24.550232  830558 type.go:168] "Request Body" body=""
	I1210 06:36:24.550319  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:24.550669  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:24.550726  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:25.049975  830558 type.go:168] "Request Body" body=""
	I1210 06:36:25.050054  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:25.050347  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:25.550435  830558 type.go:168] "Request Body" body=""
	I1210 06:36:25.550531  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:25.550872  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:26.050576  830558 type.go:168] "Request Body" body=""
	I1210 06:36:26.050655  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:26.051009  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:26.550723  830558 type.go:168] "Request Body" body=""
	I1210 06:36:26.550798  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:26.551067  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:26.551119  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:27.050878  830558 type.go:168] "Request Body" body=""
	I1210 06:36:27.050952  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:27.051289  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:27.550017  830558 type.go:168] "Request Body" body=""
	I1210 06:36:27.550094  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:27.550415  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:28.049942  830558 type.go:168] "Request Body" body=""
	I1210 06:36:28.050024  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:28.050288  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:28.550006  830558 type.go:168] "Request Body" body=""
	I1210 06:36:28.550084  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:28.550438  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:29.050159  830558 type.go:168] "Request Body" body=""
	I1210 06:36:29.050234  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:29.050566  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:29.050621  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:29.550905  830558 type.go:168] "Request Body" body=""
	I1210 06:36:29.550972  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:29.551292  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:30.050116  830558 type.go:168] "Request Body" body=""
	I1210 06:36:30.050204  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:30.050559  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:30.550551  830558 type.go:168] "Request Body" body=""
	I1210 06:36:30.550628  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:30.550956  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:31.050278  830558 type.go:168] "Request Body" body=""
	I1210 06:36:31.050353  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:31.050643  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:31.050689  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:31.550005  830558 type.go:168] "Request Body" body=""
	I1210 06:36:31.550084  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:31.550415  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:32.050146  830558 type.go:168] "Request Body" body=""
	I1210 06:36:32.050220  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:32.050568  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:32.550834  830558 type.go:168] "Request Body" body=""
	I1210 06:36:32.550909  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:32.551181  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:33.049926  830558 type.go:168] "Request Body" body=""
	I1210 06:36:33.050020  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:33.050320  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:33.550027  830558 type.go:168] "Request Body" body=""
	I1210 06:36:33.550101  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:33.550421  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:33.550496  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:34.050148  830558 type.go:168] "Request Body" body=""
	I1210 06:36:34.050221  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:34.050554  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:34.550035  830558 type.go:168] "Request Body" body=""
	I1210 06:36:34.550113  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:34.550403  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:35.050050  830558 type.go:168] "Request Body" body=""
	I1210 06:36:35.050133  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:35.050454  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:35.550293  830558 type.go:168] "Request Body" body=""
	I1210 06:36:35.550366  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:35.550646  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:35.550688  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:36.050032  830558 type.go:168] "Request Body" body=""
	I1210 06:36:36.050119  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:36.050506  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:36.550078  830558 type.go:168] "Request Body" body=""
	I1210 06:36:36.550152  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:36.550514  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:37.050074  830558 type.go:168] "Request Body" body=""
	I1210 06:36:37.050153  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:37.050427  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:37.550003  830558 type.go:168] "Request Body" body=""
	I1210 06:36:37.550086  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:37.550452  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:38.050242  830558 type.go:168] "Request Body" body=""
	I1210 06:36:38.050345  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:38.050820  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:38.050886  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:38.550627  830558 type.go:168] "Request Body" body=""
	I1210 06:36:38.550702  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:38.550965  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:39.050786  830558 type.go:168] "Request Body" body=""
	I1210 06:36:39.050858  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:39.051199  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:39.550826  830558 type.go:168] "Request Body" body=""
	I1210 06:36:39.550908  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:39.551239  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:40.049947  830558 type.go:168] "Request Body" body=""
	I1210 06:36:40.050037  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:40.050342  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:40.550382  830558 type.go:168] "Request Body" body=""
	I1210 06:36:40.550458  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:40.550826  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:40.550883  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:41.050667  830558 type.go:168] "Request Body" body=""
	I1210 06:36:41.050745  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:41.051117  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:41.550878  830558 type.go:168] "Request Body" body=""
	I1210 06:36:41.550958  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:41.551274  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:42.050917  830558 type.go:168] "Request Body" body=""
	I1210 06:36:42.050997  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:42.051354  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:42.550036  830558 type.go:168] "Request Body" body=""
	I1210 06:36:42.550117  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:42.550436  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:43.049951  830558 type.go:168] "Request Body" body=""
	I1210 06:36:43.050067  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:43.050321  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:43.050369  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:43.549987  830558 type.go:168] "Request Body" body=""
	I1210 06:36:43.550066  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:43.550404  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:44.050824  830558 type.go:168] "Request Body" body=""
	I1210 06:36:44.050905  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:44.051231  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:44.550482  830558 type.go:168] "Request Body" body=""
	I1210 06:36:44.550555  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:44.550855  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:45.050825  830558 type.go:168] "Request Body" body=""
	I1210 06:36:45.050916  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:45.051222  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:45.051274  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:45.550929  830558 type.go:168] "Request Body" body=""
	I1210 06:36:45.551008  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:45.551345  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:46.049915  830558 type.go:168] "Request Body" body=""
	I1210 06:36:46.050010  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:46.050329  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:46.549983  830558 type.go:168] "Request Body" body=""
	I1210 06:36:46.550060  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:46.550400  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:47.050060  830558 type.go:168] "Request Body" body=""
	I1210 06:36:47.050141  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:47.050495  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:47.549925  830558 type.go:168] "Request Body" body=""
	I1210 06:36:47.549995  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:47.550273  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:47.550317  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:48.049994  830558 type.go:168] "Request Body" body=""
	I1210 06:36:48.050095  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:48.050460  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:48.550037  830558 type.go:168] "Request Body" body=""
	I1210 06:36:48.550112  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:48.550446  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:49.050030  830558 type.go:168] "Request Body" body=""
	I1210 06:36:49.050116  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:49.050497  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:49.550029  830558 type.go:168] "Request Body" body=""
	I1210 06:36:49.550104  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:49.550496  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:49.550554  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:50.050050  830558 type.go:168] "Request Body" body=""
	I1210 06:36:50.050125  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:50.050500  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:50.550519  830558 type.go:168] "Request Body" body=""
	I1210 06:36:50.550589  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:50.550850  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:51.050731  830558 type.go:168] "Request Body" body=""
	I1210 06:36:51.050803  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:51.051136  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:51.550907  830558 type.go:168] "Request Body" body=""
	I1210 06:36:51.550985  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:51.551292  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:51.551347  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:52.049966  830558 type.go:168] "Request Body" body=""
	I1210 06:36:52.050040  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:52.050305  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:52.549993  830558 type.go:168] "Request Body" body=""
	I1210 06:36:52.550070  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:52.550404  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:53.050044  830558 type.go:168] "Request Body" body=""
	I1210 06:36:53.050127  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:53.050427  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:53.550649  830558 type.go:168] "Request Body" body=""
	I1210 06:36:53.550726  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:53.551013  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:54.050845  830558 type.go:168] "Request Body" body=""
	I1210 06:36:54.050929  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:54.051278  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:54.051340  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:54.549988  830558 type.go:168] "Request Body" body=""
	I1210 06:36:54.550067  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:54.550384  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:55.049973  830558 type.go:168] "Request Body" body=""
	I1210 06:36:55.050057  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:55.050384  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:55.550589  830558 type.go:168] "Request Body" body=""
	I1210 06:36:55.550672  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:55.550984  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:56.050875  830558 type.go:168] "Request Body" body=""
	I1210 06:36:56.050955  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:56.051282  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:56.549975  830558 type.go:168] "Request Body" body=""
	I1210 06:36:56.550042  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:56.550349  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:56.550406  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:57.050072  830558 type.go:168] "Request Body" body=""
	I1210 06:36:57.050159  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:57.050499  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:57.549979  830558 type.go:168] "Request Body" body=""
	I1210 06:36:57.550054  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:57.550368  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:58.049963  830558 type.go:168] "Request Body" body=""
	I1210 06:36:58.050043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:58.050303  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:58.549986  830558 type.go:168] "Request Body" body=""
	I1210 06:36:58.550064  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:58.550409  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:58.550486  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:59.050159  830558 type.go:168] "Request Body" body=""
	I1210 06:36:59.050244  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:59.050617  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical GET polls of https://192.168.49.2:8441/api/v1/nodes/functional-534748 repeat every ~500ms from 06:36:59.550 through 06:37:45.550, each returning an empty response (status="", milliseconds=0); node_ready.go:55 logs the same 'error getting node "functional-534748" condition "Ready" status (will retry): ... connect: connection refused' warning roughly every 2.5s throughout ...]
	I1210 06:37:46.050582  830558 type.go:168] "Request Body" body=""
	I1210 06:37:46.050725  830558 node_ready.go:38] duration metric: took 6m0.000935284s for node "functional-534748" to be "Ready" ...
	I1210 06:37:46.053848  830558 out.go:203] 
	W1210 06:37:46.056787  830558 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 06:37:46.056817  830558 out.go:285] * 
	W1210 06:37:46.059108  830558 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:37:46.062914  830558 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.044473661Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.044543782Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.044667574Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.044742135Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.044800712Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.044861554Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.044927113Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.044990876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.045066274Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.045166583Z" level=info msg="Connect containerd service"
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.045549881Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.046211762Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.058392030Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.058569106Z" level=info msg="Start subscribing containerd event"
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.058965353Z" level=info msg="Start recovering state"
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.067328662Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.096107621Z" level=info msg="Start event monitor"
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.096296103Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.096365273Z" level=info msg="Start streaming server"
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.096441360Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.096501923Z" level=info msg="runtime interface starting up..."
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.096569509Z" level=info msg="starting plugins..."
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.096634125Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 06:31:43 functional-534748 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 06:31:43 functional-534748 containerd[5224]: time="2025-12-10T06:31:43.098532655Z" level=info msg="containerd successfully booted in 0.083444s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:50.315287    8561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:50.316016    8561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:50.317884    8561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:50.318404    8561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:50.320049    8561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 06:37:50 up  5:19,  0 user,  load average: 0.43, 0.30, 0.78
	Linux functional-534748 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:37:46 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:37:47 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 10 06:37:47 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:47 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:47 functional-534748 kubelet[8342]: E1210 06:37:47.597886    8342 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:37:47 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:37:47 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:37:48 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 10 06:37:48 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:48 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:48 functional-534748 kubelet[8437]: E1210 06:37:48.354584    8437 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:37:48 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:37:48 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:37:49 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 10 06:37:49 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:49 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:49 functional-534748 kubelet[8458]: E1210 06:37:49.112510    8458 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:37:49 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:37:49 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:37:49 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 814.
	Dec 10 06:37:49 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:49 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:49 functional-534748 kubelet[8479]: E1210 06:37:49.867869    8479 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:37:49 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:37:49 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
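The kubelet log above is the root cause behind this test group: the v1.35.0-beta.0 kubelet refuses to start on a host using the legacy cgroup v1 hierarchy, and this Ubuntu 20.04 host (kernel 5.15.0-1084-aws) still boots with cgroup v1, so systemd restarts kubelet in a loop (restart counters 811-814 above) and the apiserver static pod never comes up. A minimal check of which cgroup hierarchy a host runs, assuming the standard /sys/fs/cgroup mount; the sample output is what a cgroup v1 host such as this one would print, not a capture from this machine:

	# cgroup2fs = unified cgroup v2; tmpfs = legacy cgroup v1 hierarchy
	$ stat -fc %T /sys/fs/cgroup/
	tmpfs
	# on Ubuntu, moving to cgroup v2 requires systemd.unified_cgroup_hierarchy=1 on the kernel command line and a reboot
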
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748: exit status 2 (345.600114ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-534748" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.21s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 kubectl -- --context functional-534748 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-534748 kubectl -- --context functional-534748 get pods: exit status 1 (129.737059ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-534748 kubectl -- --context functional-534748 get pods": exit status 1
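With kubectl reporting connection refused, probing the apiserver endpoint directly separates a dead control plane from a broken kubeconfig. A sketch using curl, which the "Last Start" log further down shows is available in the node container; the 127.0.0.1:33533 mapping for 8441/tcp comes from the docker inspect output below:

	# from the host, via the port docker publishes for 8441/tcp
	$ curl -ks https://127.0.0.1:33533/healthz
	# or from inside the node, against the address kubectl is using
	$ docker exec functional-534748 curl -ks https://192.168.49.2:8441/healthz

Since kubelet is crash-looping on its cgroup v1 check, no static pods (the apiserver included) are running, so both probes would fail to connect rather than return ok.
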
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-534748
helpers_test.go:244: (dbg) docker inspect functional-534748:

-- stdout --
	[
	    {
	        "Id": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	        "Created": "2025-12-10T06:23:23.608302198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 825111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:23:23.673039154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hostname",
	        "HostsPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hosts",
	        "LogPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db-json.log",
	        "Name": "/functional-534748",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-534748:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-534748",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	                "LowerDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-534748",
	                "Source": "/var/lib/docker/volumes/functional-534748/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-534748",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-534748",
	                "name.minikube.sigs.k8s.io": "functional-534748",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3110adc4cbcb3c834e173481804e433b3e0d0c32a77d5d828fb821433a717f76",
	            "SandboxKey": "/var/run/docker/netns/3110adc4cbcb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-534748": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:67:c6:ed:32:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5c6adbf364f07065a2170dca5c03ba4b1a3df833c656e117d5727d09797ca30e",
	                    "EndpointID": "ba255b7075cbb83aa425533c0034d7778d609f47fa6442925eb6c8393edb0fa6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-534748",
	                        "afb46bc1850e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
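Instead of dumping the whole document, docker inspect can be narrowed to the fields the post-mortem actually consults, using the same Go-template -f/--format flag minikube itself uses elsewhere in these logs; both sample outputs below are read straight from the JSON above:

	$ docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' functional-534748
	running restarts=0
	$ docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-534748
	33533

In other words, the container is healthy and the apiserver port is published; only the control plane inside it is down.
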
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748: exit status 2 (299.710407ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
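The two single-field status calls can also be combined, since --format accepts an arbitrary Go template over minikube's status struct, making the split state (host up, control plane down) visible in one invocation. The Kubelet field is an assumption by analogy with the Host and APIServer fields exercised above, and the sample output mirrors the states already recorded in this report:

	$ out/minikube-linux-arm64 status -p functional-534748 --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}'
	Running/Stopped/Stopped
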
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-634209 image ls --format short --alsologtostderr                                                                                             │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image   │ functional-634209 image ls --format yaml --alsologtostderr                                                                                              │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ functional-634209 ssh pgrep buildkitd                                                                                                                   │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ image   │ functional-634209 image ls --format json --alsologtostderr                                                                                              │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image   │ functional-634209 image build -t localhost/my-image:functional-634209 testdata/build --alsologtostderr                                                  │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image   │ functional-634209 image ls --format table --alsologtostderr                                                                                             │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image   │ functional-634209 image ls                                                                                                                              │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ delete  │ -p functional-634209                                                                                                                                    │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start   │ -p functional-534748 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ start   │ -p functional-534748 --alsologtostderr -v=8                                                                                                             │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:31 UTC │                     │
	│ cache   │ functional-534748 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ functional-534748 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ functional-534748 cache add registry.k8s.io/pause:latest                                                                                                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ functional-534748 cache add minikube-local-cache-test:functional-534748                                                                                 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ functional-534748 cache delete minikube-local-cache-test:functional-534748                                                                              │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ ssh     │ functional-534748 ssh sudo crictl images                                                                                                                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ ssh     │ functional-534748 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ ssh     │ functional-534748 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │                     │
	│ cache   │ functional-534748 cache reload                                                                                                                          │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ ssh     │ functional-534748 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ kubectl │ functional-534748 kubectl -- --context functional-534748 get pods                                                                                       │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:31:40
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:31:40.279311  830558 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:31:40.279505  830558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:31:40.279534  830558 out.go:374] Setting ErrFile to fd 2...
	I1210 06:31:40.279556  830558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:31:40.279849  830558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 06:31:40.280242  830558 out.go:368] Setting JSON to false
	I1210 06:31:40.281164  830558 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18825,"bootTime":1765329476,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 06:31:40.281259  830558 start.go:143] virtualization:  
	I1210 06:31:40.284710  830558 out.go:179] * [functional-534748] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:31:40.288411  830558 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:31:40.288473  830558 notify.go:221] Checking for updates...
	I1210 06:31:40.295121  830558 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:31:40.302607  830558 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:40.305522  830558 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 06:31:40.308355  830558 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:31:40.311698  830558 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:31:40.315095  830558 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:31:40.315199  830558 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:31:40.353797  830558 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:31:40.353929  830558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:31:40.415859  830558 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:31:40.405265704 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:31:40.415979  830558 docker.go:319] overlay module found
	I1210 06:31:40.419085  830558 out.go:179] * Using the docker driver based on existing profile
	I1210 06:31:40.421970  830558 start.go:309] selected driver: docker
	I1210 06:31:40.421991  830558 start.go:927] validating driver "docker" against &{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:31:40.422101  830558 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:31:40.422196  830558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:31:40.479216  830558 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:31:40.46865578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:31:40.479663  830558 cni.go:84] Creating CNI manager for ""
	I1210 06:31:40.479723  830558 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:31:40.479768  830558 start.go:353] cluster config:
	{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:31:40.482983  830558 out.go:179] * Starting "functional-534748" primary control-plane node in "functional-534748" cluster
	I1210 06:31:40.485814  830558 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 06:31:40.488782  830558 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:31:40.491625  830558 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:31:40.491676  830558 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 06:31:40.491687  830558 cache.go:65] Caching tarball of preloaded images
	I1210 06:31:40.491736  830558 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:31:40.491792  830558 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 06:31:40.491804  830558 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1210 06:31:40.491917  830558 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/config.json ...
	I1210 06:31:40.511808  830558 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:31:40.511830  830558 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:31:40.511847  830558 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:31:40.511881  830558 start.go:360] acquireMachinesLock for functional-534748: {Name:mkd9a3d78ae3a00b69c5be0f7badb099aea924eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:31:40.511943  830558 start.go:364] duration metric: took 39.41µs to acquireMachinesLock for "functional-534748"
	I1210 06:31:40.511975  830558 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:31:40.511985  830558 fix.go:54] fixHost starting: 
	I1210 06:31:40.512241  830558 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:31:40.529256  830558 fix.go:112] recreateIfNeeded on functional-534748: state=Running err=<nil>
	W1210 06:31:40.529298  830558 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:31:40.532448  830558 out.go:252] * Updating the running docker "functional-534748" container ...
	I1210 06:31:40.532488  830558 machine.go:94] provisionDockerMachine start ...
	I1210 06:31:40.532584  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:40.550188  830558 main.go:143] libmachine: Using SSH client type: native
	I1210 06:31:40.550543  830558 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:31:40.550560  830558 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:31:40.681995  830558 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-534748
	
	I1210 06:31:40.682020  830558 ubuntu.go:182] provisioning hostname "functional-534748"
	I1210 06:31:40.682096  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:40.699737  830558 main.go:143] libmachine: Using SSH client type: native
	I1210 06:31:40.700054  830558 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:31:40.700072  830558 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-534748 && echo "functional-534748" | sudo tee /etc/hostname
	I1210 06:31:40.843977  830558 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-534748
	
	I1210 06:31:40.844083  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:40.862627  830558 main.go:143] libmachine: Using SSH client type: native
	I1210 06:31:40.862951  830558 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:31:40.862975  830558 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-534748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-534748/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-534748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:31:40.999052  830558 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:31:40.999087  830558 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 06:31:40.999116  830558 ubuntu.go:190] setting up certificates
	I1210 06:31:40.999127  830558 provision.go:84] configureAuth start
	I1210 06:31:40.999208  830558 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
	I1210 06:31:41.018099  830558 provision.go:143] copyHostCerts
	I1210 06:31:41.018148  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 06:31:41.018188  830558 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 06:31:41.018200  830558 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 06:31:41.018276  830558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 06:31:41.018376  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 06:31:41.018397  830558 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 06:31:41.018412  830558 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 06:31:41.018442  830558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 06:31:41.018539  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 06:31:41.018565  830558 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 06:31:41.018570  830558 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 06:31:41.018598  830558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 06:31:41.018664  830558 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.functional-534748 san=[127.0.0.1 192.168.49.2 functional-534748 localhost minikube]
	I1210 06:31:41.416959  830558 provision.go:177] copyRemoteCerts
	I1210 06:31:41.417039  830558 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:31:41.417085  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.434643  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:41.530263  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 06:31:41.530324  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:31:41.547539  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 06:31:41.547601  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:31:41.565054  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 06:31:41.565115  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 06:31:41.582586  830558 provision.go:87] duration metric: took 583.43959ms to configureAuth
	I1210 06:31:41.582635  830558 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:31:41.582823  830558 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:31:41.582837  830558 machine.go:97] duration metric: took 1.050342086s to provisionDockerMachine
	I1210 06:31:41.582845  830558 start.go:293] postStartSetup for "functional-534748" (driver="docker")
	I1210 06:31:41.582857  830558 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:31:41.582912  830558 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:31:41.582957  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.603404  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:41.698354  830558 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:31:41.701779  830558 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 06:31:41.701843  830558 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 06:31:41.701865  830558 command_runner.go:130] > VERSION_ID="12"
	I1210 06:31:41.701877  830558 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 06:31:41.701883  830558 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 06:31:41.701887  830558 command_runner.go:130] > ID=debian
	I1210 06:31:41.701891  830558 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 06:31:41.701896  830558 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 06:31:41.701906  830558 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 06:31:41.701968  830558 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:31:41.702000  830558 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:31:41.702014  830558 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 06:31:41.702084  830558 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 06:31:41.702172  830558 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 06:31:41.702185  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> /etc/ssl/certs/7867512.pem
	I1210 06:31:41.702261  830558 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts -> hosts in /etc/test/nested/copy/786751
	I1210 06:31:41.702269  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts -> /etc/test/nested/copy/786751/hosts
	I1210 06:31:41.702315  830558 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/786751
	I1210 06:31:41.709991  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 06:31:41.727898  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts --> /etc/test/nested/copy/786751/hosts (40 bytes)
	I1210 06:31:41.745651  830558 start.go:296] duration metric: took 162.79042ms for postStartSetup
	I1210 06:31:41.745798  830558 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:31:41.745866  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.763287  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:41.863262  830558 command_runner.go:130] > 19%
	I1210 06:31:41.863843  830558 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:31:41.868394  830558 command_runner.go:130] > 159G
	I1210 06:31:41.868719  830558 fix.go:56] duration metric: took 1.356728705s for fixHost
	I1210 06:31:41.868739  830558 start.go:83] releasing machines lock for "functional-534748", held for 1.35678464s
	I1210 06:31:41.868810  830558 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
	I1210 06:31:41.887031  830558 ssh_runner.go:195] Run: cat /version.json
	I1210 06:31:41.887084  830558 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:31:41.887092  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.887143  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.906606  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:41.920523  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:42.095537  830558 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1210 06:31:42.095667  830558 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765319469-22089", "minikube_version": "v1.37.0", "commit": "3b564f551de69272c9de22efc5b37f8a5b0156c7"}
	I1210 06:31:42.095846  830558 ssh_runner.go:195] Run: systemctl --version
	I1210 06:31:42.103080  830558 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 06:31:42.103120  830558 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 06:31:42.103532  830558 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 06:31:42.109223  830558 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 06:31:42.109308  830558 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:31:42.109410  830558 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:31:42.119226  830558 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:31:42.119255  830558 start.go:496] detecting cgroup driver to use...
	I1210 06:31:42.119293  830558 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:31:42.119365  830558 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 06:31:42.140472  830558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 06:31:42.156795  830558 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:31:42.156872  830558 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:31:42.175919  830558 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:31:42.191679  830558 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:31:42.319538  830558 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:31:42.438460  830558 docker.go:234] disabling docker service ...
	I1210 06:31:42.438580  830558 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:31:42.456224  830558 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:31:42.471442  830558 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:31:42.599250  830558 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:31:42.716867  830558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:31:42.729172  830558 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:31:42.742342  830558 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1210 06:31:42.743581  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 06:31:42.752861  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 06:31:42.762203  830558 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 06:31:42.762278  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 06:31:42.771751  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:31:42.780168  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 06:31:42.788652  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:31:42.797230  830558 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:31:42.805633  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 06:31:42.814368  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 06:31:42.823074  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 06:31:42.832256  830558 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:31:42.839109  830558 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 06:31:42.840076  830558 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:31:42.847676  830558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:31:42.968893  830558 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 06:31:43.099901  830558 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 06:31:43.099974  830558 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 06:31:43.103852  830558 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1210 06:31:43.103874  830558 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 06:31:43.103881  830558 command_runner.go:130] > Device: 0,72	Inode: 1614        Links: 1
	I1210 06:31:43.103888  830558 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 06:31:43.103903  830558 command_runner.go:130] > Access: 2025-12-10 06:31:43.050872938 +0000
	I1210 06:31:43.103913  830558 command_runner.go:130] > Modify: 2025-12-10 06:31:43.050872938 +0000
	I1210 06:31:43.103919  830558 command_runner.go:130] > Change: 2025-12-10 06:31:43.062873060 +0000
	I1210 06:31:43.103925  830558 command_runner.go:130] >  Birth: -
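The 60s wait at start.go:543 reduces to polling for the unix socket before trusting the restarted runtime. In shell terms, a rough sketch of that logic (minikube's actual implementation is Go; this is illustrative only):

    for _ in $(seq 1 60); do
      [ -S /run/containerd/containerd.sock ] && break   # socket exists: containerd is up
      sleep 1
    done
    stat /run/containerd/containerd.sock                # final check; failure aborts the start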
	I1210 06:31:43.103951  830558 start.go:564] Will wait 60s for crictl version
	I1210 06:31:43.104009  830558 ssh_runner.go:195] Run: which crictl
	I1210 06:31:43.107381  830558 command_runner.go:130] > /usr/local/bin/crictl
	I1210 06:31:43.107477  830558 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:31:43.129358  830558 command_runner.go:130] > Version:  0.1.0
	I1210 06:31:43.129383  830558 command_runner.go:130] > RuntimeName:  containerd
	I1210 06:31:43.129392  830558 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1210 06:31:43.129396  830558 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 06:31:43.131610  830558 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 06:31:43.131682  830558 ssh_runner.go:195] Run: containerd --version
	I1210 06:31:43.151833  830558 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1210 06:31:43.153818  830558 ssh_runner.go:195] Run: containerd --version
	I1210 06:31:43.172831  830558 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1210 06:31:43.180465  830558 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 06:31:43.183314  830558 cli_runner.go:164] Run: docker network inspect functional-534748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:31:43.199081  830558 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 06:31:43.202971  830558 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1210 06:31:43.203147  830558 kubeadm.go:884] updating cluster {Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:31:43.203272  830558 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:31:43.203351  830558 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:31:43.227955  830558 command_runner.go:130] > {
	I1210 06:31:43.227978  830558 command_runner.go:130] >   "images":  [
	I1210 06:31:43.227982  830558 command_runner.go:130] >     {
	I1210 06:31:43.227991  830558 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1210 06:31:43.227996  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228002  830558 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1210 06:31:43.228005  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228009  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228020  830558 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1210 06:31:43.228023  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228028  830558 command_runner.go:130] >       "size":  "40636774",
	I1210 06:31:43.228032  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228036  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228040  830558 command_runner.go:130] >     },
	I1210 06:31:43.228044  830558 command_runner.go:130] >     {
	I1210 06:31:43.228052  830558 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1210 06:31:43.228056  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228061  830558 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 06:31:43.228066  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228082  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228094  830558 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1210 06:31:43.228097  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228102  830558 command_runner.go:130] >       "size":  "8034419",
	I1210 06:31:43.228108  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228112  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228117  830558 command_runner.go:130] >     },
	I1210 06:31:43.228121  830558 command_runner.go:130] >     {
	I1210 06:31:43.228128  830558 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 06:31:43.228135  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228141  830558 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 06:31:43.228153  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228160  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228168  830558 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1210 06:31:43.228174  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228178  830558 command_runner.go:130] >       "size":  "21168808",
	I1210 06:31:43.228182  830558 command_runner.go:130] >       "username":  "nonroot",
	I1210 06:31:43.228186  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228191  830558 command_runner.go:130] >     },
	I1210 06:31:43.228195  830558 command_runner.go:130] >     {
	I1210 06:31:43.228204  830558 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1210 06:31:43.228208  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228215  830558 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1210 06:31:43.228219  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228225  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228233  830558 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1210 06:31:43.228239  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228243  830558 command_runner.go:130] >       "size":  "21136588",
	I1210 06:31:43.228247  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228250  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.228254  830558 command_runner.go:130] >       },
	I1210 06:31:43.228258  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228264  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228272  830558 command_runner.go:130] >     },
	I1210 06:31:43.228279  830558 command_runner.go:130] >     {
	I1210 06:31:43.228286  830558 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1210 06:31:43.228290  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228295  830558 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1210 06:31:43.228299  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228303  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228313  830558 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1210 06:31:43.228317  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228321  830558 command_runner.go:130] >       "size":  "24678359",
	I1210 06:31:43.228331  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228340  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.228350  830558 command_runner.go:130] >       },
	I1210 06:31:43.228354  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228357  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228361  830558 command_runner.go:130] >     },
	I1210 06:31:43.228364  830558 command_runner.go:130] >     {
	I1210 06:31:43.228371  830558 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1210 06:31:43.228384  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228390  830558 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1210 06:31:43.228394  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228398  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228406  830558 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1210 06:31:43.228412  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228416  830558 command_runner.go:130] >       "size":  "20661043",
	I1210 06:31:43.228420  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228424  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.228427  830558 command_runner.go:130] >       },
	I1210 06:31:43.228438  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228443  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228445  830558 command_runner.go:130] >     },
	I1210 06:31:43.228448  830558 command_runner.go:130] >     {
	I1210 06:31:43.228455  830558 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1210 06:31:43.228463  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228471  830558 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1210 06:31:43.228475  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228479  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228487  830558 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1210 06:31:43.228493  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228497  830558 command_runner.go:130] >       "size":  "22429671",
	I1210 06:31:43.228502  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228512  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228515  830558 command_runner.go:130] >     },
	I1210 06:31:43.228518  830558 command_runner.go:130] >     {
	I1210 06:31:43.228525  830558 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1210 06:31:43.228530  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228538  830558 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1210 06:31:43.228542  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228546  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228557  830558 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1210 06:31:43.228566  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228573  830558 command_runner.go:130] >       "size":  "15391364",
	I1210 06:31:43.228577  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228580  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.228584  830558 command_runner.go:130] >       },
	I1210 06:31:43.228594  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228598  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228601  830558 command_runner.go:130] >     },
	I1210 06:31:43.228604  830558 command_runner.go:130] >     {
	I1210 06:31:43.228611  830558 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 06:31:43.228617  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228621  830558 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 06:31:43.228627  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228631  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228641  830558 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1210 06:31:43.228647  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228655  830558 command_runner.go:130] >       "size":  "267939",
	I1210 06:31:43.228659  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228669  830558 command_runner.go:130] >         "value":  "65535"
	I1210 06:31:43.228673  830558 command_runner.go:130] >       },
	I1210 06:31:43.228677  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228681  830558 command_runner.go:130] >       "pinned":  true
	I1210 06:31:43.228686  830558 command_runner.go:130] >     }
	I1210 06:31:43.228689  830558 command_runner.go:130] >   ]
	I1210 06:31:43.228692  830558 command_runner.go:130] > }
	I1210 06:31:43.228843  830558 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 06:31:43.228853  830558 containerd.go:534] Images already preloaded, skipping extraction
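The preload check parses the JSON above and confirms every image the v1.35.0-beta.0 control plane needs is already in the containerd image store, so the preload tarball is not re-extracted. A manual spot check along the same lines, assuming jq is available on the node (it is not used by the test itself):

    sudo crictl images --output json \
      | jq -r '.images[].repoTags[]' \
      | grep -qx 'registry.k8s.io/kube-apiserver:v1.35.0-beta.0' \
      && echo "apiserver image preloaded"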
	I1210 06:31:43.228913  830558 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:31:43.254390  830558 command_runner.go:130] > {
	I1210 06:31:43.254411  830558 command_runner.go:130] >   "images":  [
	I1210 06:31:43.254415  830558 command_runner.go:130] >     {
	I1210 06:31:43.254424  830558 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1210 06:31:43.254430  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254435  830558 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1210 06:31:43.254440  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254444  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254453  830558 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1210 06:31:43.254460  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254488  830558 command_runner.go:130] >       "size":  "40636774",
	I1210 06:31:43.254495  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254499  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254508  830558 command_runner.go:130] >     },
	I1210 06:31:43.254512  830558 command_runner.go:130] >     {
	I1210 06:31:43.254527  830558 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1210 06:31:43.254534  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254540  830558 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 06:31:43.254543  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254547  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254556  830558 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1210 06:31:43.254576  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254581  830558 command_runner.go:130] >       "size":  "8034419",
	I1210 06:31:43.254585  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254589  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254600  830558 command_runner.go:130] >     },
	I1210 06:31:43.254603  830558 command_runner.go:130] >     {
	I1210 06:31:43.254609  830558 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 06:31:43.254619  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254624  830558 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 06:31:43.254627  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254638  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254649  830558 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1210 06:31:43.254661  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254665  830558 command_runner.go:130] >       "size":  "21168808",
	I1210 06:31:43.254669  830558 command_runner.go:130] >       "username":  "nonroot",
	I1210 06:31:43.254673  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254677  830558 command_runner.go:130] >     },
	I1210 06:31:43.254680  830558 command_runner.go:130] >     {
	I1210 06:31:43.254694  830558 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1210 06:31:43.254698  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254703  830558 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1210 06:31:43.254706  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254710  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254721  830558 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1210 06:31:43.254725  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254729  830558 command_runner.go:130] >       "size":  "21136588",
	I1210 06:31:43.254735  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.254739  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.254744  830558 command_runner.go:130] >       },
	I1210 06:31:43.254749  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254753  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254765  830558 command_runner.go:130] >     },
	I1210 06:31:43.254768  830558 command_runner.go:130] >     {
	I1210 06:31:43.254779  830558 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1210 06:31:43.254786  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254791  830558 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1210 06:31:43.254795  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254798  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254806  830558 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1210 06:31:43.254810  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254816  830558 command_runner.go:130] >       "size":  "24678359",
	I1210 06:31:43.254820  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.254831  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.254835  830558 command_runner.go:130] >       },
	I1210 06:31:43.254843  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254850  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254853  830558 command_runner.go:130] >     },
	I1210 06:31:43.254860  830558 command_runner.go:130] >     {
	I1210 06:31:43.254867  830558 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1210 06:31:43.254873  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254879  830558 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1210 06:31:43.254882  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254886  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254894  830558 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1210 06:31:43.254897  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254901  830558 command_runner.go:130] >       "size":  "20661043",
	I1210 06:31:43.254907  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.254911  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.254916  830558 command_runner.go:130] >       },
	I1210 06:31:43.254920  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254926  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254929  830558 command_runner.go:130] >     },
	I1210 06:31:43.254932  830558 command_runner.go:130] >     {
	I1210 06:31:43.254939  830558 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1210 06:31:43.254945  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254951  830558 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1210 06:31:43.254958  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254962  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254970  830558 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1210 06:31:43.254975  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254979  830558 command_runner.go:130] >       "size":  "22429671",
	I1210 06:31:43.254982  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254987  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254992  830558 command_runner.go:130] >     },
	I1210 06:31:43.254995  830558 command_runner.go:130] >     {
	I1210 06:31:43.255004  830558 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1210 06:31:43.255008  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.255022  830558 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1210 06:31:43.255026  830558 command_runner.go:130] >       ],
	I1210 06:31:43.255030  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.255038  830558 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1210 06:31:43.255044  830558 command_runner.go:130] >       ],
	I1210 06:31:43.255048  830558 command_runner.go:130] >       "size":  "15391364",
	I1210 06:31:43.255051  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.255055  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.255058  830558 command_runner.go:130] >       },
	I1210 06:31:43.255061  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.255065  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.255069  830558 command_runner.go:130] >     },
	I1210 06:31:43.255072  830558 command_runner.go:130] >     {
	I1210 06:31:43.255081  830558 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 06:31:43.255088  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.255093  830558 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 06:31:43.255098  830558 command_runner.go:130] >       ],
	I1210 06:31:43.255102  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.255109  830558 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1210 06:31:43.255112  830558 command_runner.go:130] >       ],
	I1210 06:31:43.255116  830558 command_runner.go:130] >       "size":  "267939",
	I1210 06:31:43.255122  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.255129  830558 command_runner.go:130] >         "value":  "65535"
	I1210 06:31:43.255136  830558 command_runner.go:130] >       },
	I1210 06:31:43.255140  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.255143  830558 command_runner.go:130] >       "pinned":  true
	I1210 06:31:43.255147  830558 command_runner.go:130] >     }
	I1210 06:31:43.255150  830558 command_runner.go:130] >   ]
	I1210 06:31:43.255153  830558 command_runner.go:130] > }
	I1210 06:31:43.257476  830558 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 06:31:43.257497  830558 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:31:43.257505  830558 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1210 06:31:43.257607  830558 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-534748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
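In the drop-in above, the empty ExecStart= line is deliberate: it clears the command inherited from the base kubelet.service so the minikube-specific ExecStart can replace it, since systemd rejects a second ExecStart= for non-oneshot services. A hypothetical way to inspect the merged unit on the node (not part of this run):

    systemctl cat kubelet                 # base unit plus drop-ins, in order
    systemctl show kubelet -p ExecStart   # the effective command after overrides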
	I1210 06:31:43.257674  830558 ssh_runner.go:195] Run: sudo crictl info
	I1210 06:31:43.280486  830558 command_runner.go:130] > {
	I1210 06:31:43.280508  830558 command_runner.go:130] >   "cniconfig": {
	I1210 06:31:43.280515  830558 command_runner.go:130] >     "Networks": [
	I1210 06:31:43.280519  830558 command_runner.go:130] >       {
	I1210 06:31:43.280525  830558 command_runner.go:130] >         "Config": {
	I1210 06:31:43.280531  830558 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1210 06:31:43.280536  830558 command_runner.go:130] >           "Name": "cni-loopback",
	I1210 06:31:43.280541  830558 command_runner.go:130] >           "Plugins": [
	I1210 06:31:43.280545  830558 command_runner.go:130] >             {
	I1210 06:31:43.280549  830558 command_runner.go:130] >               "Network": {
	I1210 06:31:43.280553  830558 command_runner.go:130] >                 "ipam": {},
	I1210 06:31:43.280572  830558 command_runner.go:130] >                 "type": "loopback"
	I1210 06:31:43.280586  830558 command_runner.go:130] >               },
	I1210 06:31:43.280593  830558 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1210 06:31:43.280596  830558 command_runner.go:130] >             }
	I1210 06:31:43.280600  830558 command_runner.go:130] >           ],
	I1210 06:31:43.280614  830558 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1210 06:31:43.280625  830558 command_runner.go:130] >         },
	I1210 06:31:43.280630  830558 command_runner.go:130] >         "IFName": "lo"
	I1210 06:31:43.280633  830558 command_runner.go:130] >       }
	I1210 06:31:43.280637  830558 command_runner.go:130] >     ],
	I1210 06:31:43.280642  830558 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1210 06:31:43.280652  830558 command_runner.go:130] >     "PluginDirs": [
	I1210 06:31:43.280656  830558 command_runner.go:130] >       "/opt/cni/bin"
	I1210 06:31:43.280660  830558 command_runner.go:130] >     ],
	I1210 06:31:43.280671  830558 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1210 06:31:43.280679  830558 command_runner.go:130] >     "Prefix": "eth"
	I1210 06:31:43.280682  830558 command_runner.go:130] >   },
	I1210 06:31:43.280686  830558 command_runner.go:130] >   "config": {
	I1210 06:31:43.280693  830558 command_runner.go:130] >     "cdiSpecDirs": [
	I1210 06:31:43.280699  830558 command_runner.go:130] >       "/etc/cdi",
	I1210 06:31:43.280705  830558 command_runner.go:130] >       "/var/run/cdi"
	I1210 06:31:43.280710  830558 command_runner.go:130] >     ],
	I1210 06:31:43.280714  830558 command_runner.go:130] >     "cni": {
	I1210 06:31:43.280725  830558 command_runner.go:130] >       "binDir": "",
	I1210 06:31:43.280729  830558 command_runner.go:130] >       "binDirs": [
	I1210 06:31:43.280732  830558 command_runner.go:130] >         "/opt/cni/bin"
	I1210 06:31:43.280736  830558 command_runner.go:130] >       ],
	I1210 06:31:43.280740  830558 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1210 06:31:43.280744  830558 command_runner.go:130] >       "confTemplate": "",
	I1210 06:31:43.280747  830558 command_runner.go:130] >       "ipPref": "",
	I1210 06:31:43.280751  830558 command_runner.go:130] >       "maxConfNum": 1,
	I1210 06:31:43.280755  830558 command_runner.go:130] >       "setupSerially": false,
	I1210 06:31:43.280759  830558 command_runner.go:130] >       "useInternalLoopback": false
	I1210 06:31:43.280762  830558 command_runner.go:130] >     },
	I1210 06:31:43.280768  830558 command_runner.go:130] >     "containerd": {
	I1210 06:31:43.280772  830558 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1210 06:31:43.280776  830558 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1210 06:31:43.280781  830558 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1210 06:31:43.280789  830558 command_runner.go:130] >       "runtimes": {
	I1210 06:31:43.280793  830558 command_runner.go:130] >         "runc": {
	I1210 06:31:43.280797  830558 command_runner.go:130] >           "ContainerAnnotations": null,
	I1210 06:31:43.280802  830558 command_runner.go:130] >           "PodAnnotations": null,
	I1210 06:31:43.280806  830558 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1210 06:31:43.280811  830558 command_runner.go:130] >           "cgroupWritable": false,
	I1210 06:31:43.280814  830558 command_runner.go:130] >           "cniConfDir": "",
	I1210 06:31:43.280818  830558 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1210 06:31:43.280822  830558 command_runner.go:130] >           "io_type": "",
	I1210 06:31:43.280827  830558 command_runner.go:130] >           "options": {
	I1210 06:31:43.280838  830558 command_runner.go:130] >             "BinaryName": "",
	I1210 06:31:43.280850  830558 command_runner.go:130] >             "CriuImagePath": "",
	I1210 06:31:43.280854  830558 command_runner.go:130] >             "CriuWorkPath": "",
	I1210 06:31:43.280858  830558 command_runner.go:130] >             "IoGid": 0,
	I1210 06:31:43.280862  830558 command_runner.go:130] >             "IoUid": 0,
	I1210 06:31:43.280866  830558 command_runner.go:130] >             "NoNewKeyring": false,
	I1210 06:31:43.280872  830558 command_runner.go:130] >             "Root": "",
	I1210 06:31:43.280877  830558 command_runner.go:130] >             "ShimCgroup": "",
	I1210 06:31:43.280883  830558 command_runner.go:130] >             "SystemdCgroup": false
	I1210 06:31:43.280887  830558 command_runner.go:130] >           },
	I1210 06:31:43.280892  830558 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1210 06:31:43.280898  830558 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1210 06:31:43.280902  830558 command_runner.go:130] >           "runtimePath": "",
	I1210 06:31:43.280907  830558 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1210 06:31:43.280912  830558 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1210 06:31:43.280918  830558 command_runner.go:130] >           "snapshotter": ""
	I1210 06:31:43.280921  830558 command_runner.go:130] >         }
	I1210 06:31:43.280925  830558 command_runner.go:130] >       }
	I1210 06:31:43.280930  830558 command_runner.go:130] >     },
	I1210 06:31:43.280941  830558 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1210 06:31:43.280949  830558 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1210 06:31:43.280959  830558 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1210 06:31:43.280965  830558 command_runner.go:130] >     "disableApparmor": false,
	I1210 06:31:43.280970  830558 command_runner.go:130] >     "disableHugetlbController": true,
	I1210 06:31:43.280976  830558 command_runner.go:130] >     "disableProcMount": false,
	I1210 06:31:43.280983  830558 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1210 06:31:43.280986  830558 command_runner.go:130] >     "enableCDI": true,
	I1210 06:31:43.280991  830558 command_runner.go:130] >     "enableSelinux": false,
	I1210 06:31:43.280995  830558 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1210 06:31:43.281002  830558 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1210 06:31:43.281009  830558 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1210 06:31:43.281014  830558 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1210 06:31:43.281021  830558 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1210 06:31:43.281029  830558 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1210 06:31:43.281034  830558 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1210 06:31:43.281040  830558 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1210 06:31:43.281047  830558 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1210 06:31:43.281052  830558 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1210 06:31:43.281057  830558 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1210 06:31:43.281062  830558 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1210 06:31:43.281067  830558 command_runner.go:130] >   },
	I1210 06:31:43.281071  830558 command_runner.go:130] >   "features": {
	I1210 06:31:43.281076  830558 command_runner.go:130] >     "supplemental_groups_policy": true
	I1210 06:31:43.281079  830558 command_runner.go:130] >   },
	I1210 06:31:43.281083  830558 command_runner.go:130] >   "golang": "go1.24.9",
	I1210 06:31:43.281095  830558 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1210 06:31:43.281107  830558 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1210 06:31:43.281111  830558 command_runner.go:130] >   "runtimeHandlers": [
	I1210 06:31:43.281114  830558 command_runner.go:130] >     {
	I1210 06:31:43.281118  830558 command_runner.go:130] >       "features": {
	I1210 06:31:43.281129  830558 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1210 06:31:43.281134  830558 command_runner.go:130] >         "user_namespaces": true
	I1210 06:31:43.281137  830558 command_runner.go:130] >       }
	I1210 06:31:43.281142  830558 command_runner.go:130] >     },
	I1210 06:31:43.281145  830558 command_runner.go:130] >     {
	I1210 06:31:43.281148  830558 command_runner.go:130] >       "features": {
	I1210 06:31:43.281153  830558 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1210 06:31:43.281158  830558 command_runner.go:130] >         "user_namespaces": true
	I1210 06:31:43.281161  830558 command_runner.go:130] >       },
	I1210 06:31:43.281168  830558 command_runner.go:130] >       "name": "runc"
	I1210 06:31:43.281171  830558 command_runner.go:130] >     }
	I1210 06:31:43.281174  830558 command_runner.go:130] >   ],
	I1210 06:31:43.281178  830558 command_runner.go:130] >   "status": {
	I1210 06:31:43.281183  830558 command_runner.go:130] >     "conditions": [
	I1210 06:31:43.281186  830558 command_runner.go:130] >       {
	I1210 06:31:43.281190  830558 command_runner.go:130] >         "message": "",
	I1210 06:31:43.281205  830558 command_runner.go:130] >         "reason": "",
	I1210 06:31:43.281209  830558 command_runner.go:130] >         "status": true,
	I1210 06:31:43.281214  830558 command_runner.go:130] >         "type": "RuntimeReady"
	I1210 06:31:43.281220  830558 command_runner.go:130] >       },
	I1210 06:31:43.281224  830558 command_runner.go:130] >       {
	I1210 06:31:43.281230  830558 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1210 06:31:43.281235  830558 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1210 06:31:43.281239  830558 command_runner.go:130] >         "status": false,
	I1210 06:31:43.281243  830558 command_runner.go:130] >         "type": "NetworkReady"
	I1210 06:31:43.281246  830558 command_runner.go:130] >       },
	I1210 06:31:43.281249  830558 command_runner.go:130] >       {
	I1210 06:31:43.281271  830558 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1210 06:31:43.281280  830558 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1210 06:31:43.281286  830558 command_runner.go:130] >         "status": false,
	I1210 06:31:43.281292  830558 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1210 06:31:43.281298  830558 command_runner.go:130] >       }
	I1210 06:31:43.281301  830558 command_runner.go:130] >     ]
	I1210 06:31:43.281304  830558 command_runner.go:130] >   }
	I1210 06:31:43.281308  830558 command_runner.go:130] > }
	I1210 06:31:43.283879  830558 cni.go:84] Creating CNI manager for ""
	I1210 06:31:43.283902  830558 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:31:43.283924  830558 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
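The NetworkReady=false condition reported by crictl info above persists until kindnet is deployed and drops a conflist into /etc/cni/net.d, carving a per-node subnet out of this pod CIDR. The example below illustrates the usual kindnet ptp+portmap shape under that assumption; it is not captured from the node, and exact fields may differ by kindnet version:

    {
      "cniVersion": "0.3.1",
      "name": "kindnet",
      "plugins": [
        {
          "type": "ptp",
          "ipMasq": false,
          "ipam": {
            "type": "host-local",
            "dataDir": "/run/cni-ipam-state",
            "routes": [{ "dst": "0.0.0.0/0" }],
            "ranges": [[{ "subnet": "10.244.0.0/24" }]]
          },
          "mtu": 1500
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }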
	I1210 06:31:43.283950  830558 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-534748 NodeName:functional-534748 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:31:43.284076  830558 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-534748"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:31:43.284154  830558 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 06:31:43.290942  830558 command_runner.go:130] > kubeadm
	I1210 06:31:43.290962  830558 command_runner.go:130] > kubectl
	I1210 06:31:43.290967  830558 command_runner.go:130] > kubelet
	I1210 06:31:43.291913  830558 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:31:43.292013  830558 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:31:43.299680  830558 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 06:31:43.314082  830558 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 06:31:43.330260  830558 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
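The rendered kubeadm config was just copied to /var/tmp/minikube/kubeadm.yaml.new. A hypothetical by-hand sanity check before kubeadm consumes it, using the validate subcommand available in recent kubeadm releases (not a step the test performs):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new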
	I1210 06:31:43.347625  830558 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:31:43.352127  830558 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1210 06:31:43.352925  830558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:31:43.471703  830558 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:31:44.297320  830558 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748 for IP: 192.168.49.2
	I1210 06:31:44.297353  830558 certs.go:195] generating shared ca certs ...
	I1210 06:31:44.297370  830558 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:31:44.297565  830558 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 06:31:44.297620  830558 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 06:31:44.297640  830558 certs.go:257] generating profile certs ...
	I1210 06:31:44.297767  830558 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key
	I1210 06:31:44.297844  830558 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key.7cb3dc2f
	I1210 06:31:44.297905  830558 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key
	I1210 06:31:44.297923  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 06:31:44.297952  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 06:31:44.297969  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 06:31:44.297986  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 06:31:44.297997  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 06:31:44.298022  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 06:31:44.298036  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 06:31:44.298051  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 06:31:44.298107  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 06:31:44.298147  830558 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 06:31:44.298160  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:31:44.298194  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 06:31:44.298223  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:31:44.298262  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 06:31:44.298323  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 06:31:44.298363  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem -> /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.298380  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.298399  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.299062  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:31:44.319985  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:31:44.339121  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:31:44.360050  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:31:44.381013  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:31:44.398560  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:31:44.416157  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:31:44.433967  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:31:44.452197  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 06:31:44.470088  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 06:31:44.487844  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:31:44.505551  830558 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:31:44.518440  830558 ssh_runner.go:195] Run: openssl version
	I1210 06:31:44.524638  830558 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 06:31:44.525053  830558 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.532466  830558 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 06:31:44.539857  830558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.543663  830558 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.543696  830558 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.543746  830558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.585800  830558 command_runner.go:130] > 51391683
	I1210 06:31:44.586242  830558 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:31:44.594754  830558 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.602172  830558 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 06:31:44.609494  830558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.613294  830558 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.613412  830558 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.613500  830558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.654003  830558 command_runner.go:130] > 3ec20f2e
	I1210 06:31:44.654513  830558 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:31:44.661754  830558 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.668842  830558 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:31:44.676441  830558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.680175  830558 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.680286  830558 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.680373  830558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.725770  830558 command_runner.go:130] > b5213941
	I1210 06:31:44.726319  830558 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
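The hash-and-symlink sequence above (repeated for each of the three certificates) is how OpenSSL discovers trusted CAs: openssl x509 -hash prints the subject-name hash, and a <hash>.0 symlink in /etc/ssl/certs lets lookups by subject resolve to the PEM file. Condensed into one sketch for the minikubeCA case:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 above
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    sudo test -L "/etc/ssl/certs/${h}.0" && echo "CA installed as ${h}.0"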
	I1210 06:31:44.734095  830558 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:31:44.737911  830558 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:31:44.737986  830558 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1210 06:31:44.737999  830558 command_runner.go:130] > Device: 259,1	Inode: 1050653     Links: 1
	I1210 06:31:44.738007  830558 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 06:31:44.738013  830558 command_runner.go:130] > Access: 2025-12-10 06:27:36.644508596 +0000
	I1210 06:31:44.738018  830558 command_runner.go:130] > Modify: 2025-12-10 06:23:32.161941675 +0000
	I1210 06:31:44.738023  830558 command_runner.go:130] > Change: 2025-12-10 06:23:32.161941675 +0000
	I1210 06:31:44.738028  830558 command_runner.go:130] >  Birth: 2025-12-10 06:23:32.161941675 +0000
	I1210 06:31:44.738118  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:31:44.779233  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.779410  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:31:44.820004  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.820457  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:31:44.860741  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.861258  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:31:44.902039  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.902514  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:31:44.943742  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.944234  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 06:31:44.986027  830558 command_runner.go:130] > Certificate will not expire
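Each control-plane certificate is then vetted with openssl x509 -checkend 86400, which exits 0 and prints "Certificate will not expire" when the certificate remains valid for at least the given number of seconds, here 24 hours. The same check as a standalone snippet, with the certificate path taken from the log:

    CERT=/var/lib/minikube/certs/apiserver-kubelet-client.crt
    # -checkend N exits 0 if the certificate is still valid N seconds from now.
    if openssl x509 -noout -in "$CERT" -checkend 86400; then
        echo "certificate valid for at least 24h"
    else
        echo "certificate expires within 24h; it would need regeneration"
    fi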
	I1210 06:31:44.986500  830558 kubeadm.go:401] StartCluster: {Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:31:44.986586  830558 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 06:31:44.986679  830558 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:31:45.063121  830558 cri.go:89] found id: ""
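cri.go found no kube-system containers (the ID list is empty), so minikube next probes for leftover kubeadm state on disk rather than adopting a running control plane. The probe it ran is an ordinary crictl query and can be reproduced verbatim on the node:

    # List all containers (running or exited) whose pod namespace is kube-system,
    # printing only container IDs; empty output means no control-plane containers.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system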
	I1210 06:31:45.063216  830558 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:31:45.099783  830558 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 06:31:45.099866  830558 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 06:31:45.099891  830558 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 06:31:45.101399  830558 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:31:45.101477  830558 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:31:45.101575  830558 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:31:45.115892  830558 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:31:45.116487  830558 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-534748" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:45.116718  830558 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-784887/kubeconfig needs updating (will repair): [kubeconfig missing "functional-534748" cluster setting kubeconfig missing "functional-534748" context setting]
	I1210 06:31:45.117177  830558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
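The host kubeconfig was missing both the cluster and the context entry for this profile, so minikube rewrites the file under a lock. A rough manual equivalent using standard kubectl config subcommands, with the server address and paths taken from the log (the user name is assumed to match the profile):

    KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
    kubectl --kubeconfig="$KUBECONFIG" config set-cluster functional-534748 \
        --server=https://192.168.49.2:8441 \
        --certificate-authority=/home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt
    kubectl --kubeconfig="$KUBECONFIG" config set-context functional-534748 \
        --cluster=functional-534748 --user=functional-534748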
	I1210 06:31:45.117949  830558 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:45.118213  830558 kapi.go:59] client config for functional-534748: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key", CAFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:31:45.118984  830558 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 06:31:45.119085  830558 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 06:31:45.119134  830558 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 06:31:45.119161  830558 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 06:31:45.119217  830558 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 06:31:45.119055  830558 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 06:31:45.119702  830558 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:31:45.137495  830558 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1210 06:31:45.137534  830558 kubeadm.go:602] duration metric: took 36.034287ms to restartPrimaryControlPlane
	I1210 06:31:45.137546  830558 kubeadm.go:403] duration metric: took 151.054854ms to StartCluster
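The restart path decided against a kubeadm reconfiguration by diffing the config already on the node against the freshly rendered one: diff -u exits 0 when the two files match, which is what "does not require reconfiguration" reflects. As a sketch:

    # Exit status 0 means the rendered kubeadm config is unchanged,
    # so the running control plane can be kept as-is.
    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        echo "no kubeadm changes; skipping control-plane reconfiguration"
    fi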
	I1210 06:31:45.137576  830558 settings.go:142] acquiring lock: {Name:mkea9bfe0055ec090e4ce10a287599541b65b38e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:31:45.137653  830558 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:45.138311  830558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:31:45.138643  830558 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 06:31:45.139043  830558 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:31:45.139108  830558 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:31:45.139177  830558 addons.go:70] Setting storage-provisioner=true in profile "functional-534748"
	I1210 06:31:45.139193  830558 addons.go:239] Setting addon storage-provisioner=true in "functional-534748"
	I1210 06:31:45.139221  830558 host.go:66] Checking if "functional-534748" exists ...
	I1210 06:31:45.139239  830558 addons.go:70] Setting default-storageclass=true in profile "functional-534748"
	I1210 06:31:45.139259  830558 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-534748"
	I1210 06:31:45.139583  830558 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:31:45.139701  830558 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:31:45.145574  830558 out.go:179] * Verifying Kubernetes components...
	I1210 06:31:45.148690  830558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:31:45.190248  830558 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:45.190435  830558 kapi.go:59] client config for functional-534748: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key", CAFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:31:45.190756  830558 addons.go:239] Setting addon default-storageclass=true in "functional-534748"
	I1210 06:31:45.190791  830558 host.go:66] Checking if "functional-534748" exists ...
	I1210 06:31:45.192137  830558 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:31:45.207281  830558 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:31:45.210256  830558 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:45.210285  830558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:31:45.210364  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:45.229978  830558 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:45.230080  830558 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:31:45.230235  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:45.286606  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:45.319378  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:45.390267  830558 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:31:45.420552  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:45.445487  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:46.049742  830558 node_ready.go:35] waiting up to 6m0s for node "functional-534748" to be "Ready" ...
	I1210 06:31:46.049893  830558 type.go:168] "Request Body" body=""
	I1210 06:31:46.049953  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
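From here node_ready.go polls the node object every 500ms, issuing the GET shown above and reading the node's Ready condition from the response. The same check from a shell, assuming a kubeconfig pointing at this profile (the jsonpath expression is a standard kubectl feature, not minikube's code):

    # Prints "True" once the node reports a satisfied Ready condition.
    kubectl get node functional-534748 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'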
	I1210 06:31:46.050234  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.050272  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.050293  830558 retry.go:31] will retry after 223.621304ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.050345  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.050359  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.050366  830558 retry.go:31] will retry after 336.04204ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
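Both addon applies fail the same way: kubectl apply validates manifests client-side against the server's OpenAPI schema, and with the apiserver not yet listening the download of /openapi/v2 is refused, so the apply aborts before anything reaches the cluster. minikube's response is to retry with backoff until the apiserver returns, but the error text itself names the escape hatch; a hedged example, only sensible for trusted manifests since it skips schema validation entirely:

    # Skip client-side schema validation so the apply no longer depends on /openapi/v2.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --validate=false \
        -f /etc/kubernetes/addons/storage-provisioner.yaml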
	I1210 06:31:46.050483  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:46.274791  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:46.331904  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.335903  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.335940  830558 retry.go:31] will retry after 342.637774ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.387178  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:46.449259  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.449297  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.449332  830558 retry.go:31] will retry after 384.971387ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.550591  830558 type.go:168] "Request Body" body=""
	I1210 06:31:46.550669  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:46.551072  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:46.679392  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:46.735005  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.738824  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.738907  830558 retry.go:31] will retry after 477.156435ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.835016  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:46.898535  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.902447  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.902505  830558 retry.go:31] will retry after 587.076477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:47.050715  830558 type.go:168] "Request Body" body=""
	I1210 06:31:47.050787  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:47.051147  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:47.216664  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:47.275932  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:47.275982  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:47.276003  830558 retry.go:31] will retry after 1.079016213s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:47.490360  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:47.550012  830558 type.go:168] "Request Body" body=""
	I1210 06:31:47.550089  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:47.550390  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:47.551946  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:47.551982  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:47.552018  830558 retry.go:31] will retry after 1.089774327s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:48.050900  830558 type.go:168] "Request Body" body=""
	I1210 06:31:48.051018  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:48.051381  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:48.051446  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
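The connection-refused warning confirms the apiserver is not yet accepting connections on 192.168.49.2:8441; the readiness poll simply keeps retrying. A minimal way to block until the endpoint answers, using kubectl's raw health probe (the loop is illustrative, not minikube's implementation):

    # Poll the apiserver readiness endpoint until it responds.
    until kubectl get --raw=/readyz >/dev/null 2>&1; do
        sleep 1
    done
    echo "apiserver is ready"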
	I1210 06:31:48.355639  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:48.413382  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:48.416787  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:48.416855  830558 retry.go:31] will retry after 1.248652089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:48.550032  830558 type.go:168] "Request Body" body=""
	I1210 06:31:48.550107  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:48.550399  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:48.642762  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:48.712914  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:48.712955  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:48.712975  830558 retry.go:31] will retry after 929.620731ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:49.050281  830558 type.go:168] "Request Body" body=""
	I1210 06:31:49.050356  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:49.050675  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:49.550083  830558 type.go:168] "Request Body" body=""
	I1210 06:31:49.550159  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:49.550564  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:49.643743  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:49.666178  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:49.715961  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:49.724279  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:49.724309  830558 retry.go:31] will retry after 2.037720794s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:49.735770  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:49.735805  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:49.735824  830558 retry.go:31] will retry after 1.943919735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:50.050051  830558 type.go:168] "Request Body" body=""
	I1210 06:31:50.050130  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:50.050489  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:50.550100  830558 type.go:168] "Request Body" body=""
	I1210 06:31:50.550171  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:50.550494  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:50.550550  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:51.050020  830558 type.go:168] "Request Body" body=""
	I1210 06:31:51.050105  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:51.050456  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:51.550105  830558 type.go:168] "Request Body" body=""
	I1210 06:31:51.550181  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:51.550525  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:51.680862  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:51.745585  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:51.745620  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:51.745639  830558 retry.go:31] will retry after 2.112684099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:51.762814  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:51.821569  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:51.825567  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:51.825603  830558 retry.go:31] will retry after 2.699110245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:52.050957  830558 type.go:168] "Request Body" body=""
	I1210 06:31:52.051054  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:52.051439  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:52.550045  830558 type.go:168] "Request Body" body=""
	I1210 06:31:52.550123  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:52.550446  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:53.050176  830558 type.go:168] "Request Body" body=""
	I1210 06:31:53.050253  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:53.050635  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:53.050697  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:53.550816  830558 type.go:168] "Request Body" body=""
	I1210 06:31:53.550907  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:53.551250  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:53.858630  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:53.918073  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:53.921869  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:53.921905  830558 retry.go:31] will retry after 2.635687612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:54.050031  830558 type.go:168] "Request Body" body=""
	I1210 06:31:54.050137  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:54.050511  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:54.525086  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:54.550579  830558 type.go:168] "Request Body" body=""
	I1210 06:31:54.550656  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:54.550932  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:54.585338  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:54.588955  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:54.588990  830558 retry.go:31] will retry after 2.164216453s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:55.050098  830558 type.go:168] "Request Body" body=""
	I1210 06:31:55.050166  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:55.050436  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:55.550589  830558 type.go:168] "Request Body" body=""
	I1210 06:31:55.550697  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:55.551055  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:55.551113  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:56.050733  830558 type.go:168] "Request Body" body=""
	I1210 06:31:56.050815  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:56.051188  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:56.549910  830558 type.go:168] "Request Body" body=""
	I1210 06:31:56.550000  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:56.550302  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:56.558696  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:56.634154  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:56.634201  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:56.634222  830558 retry.go:31] will retry after 5.842380515s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:56.753466  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:56.822332  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:56.822371  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:56.822391  830558 retry.go:31] will retry after 4.388036914s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:57.050861  830558 type.go:168] "Request Body" body=""
	I1210 06:31:57.050942  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:57.051261  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:58.050362  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical GET/empty-response poll repeated every ~500ms through 06:32:01.051, with the same node_ready.go:55 connection-refused warning recurring at 06:32:00.050 ...]
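
The node_ready.go poll condensed above is, in effect, a Get on the Node object every 500ms until its Ready condition turns True. A minimal sketch using the public client-go API (minikube's internal helper differs in detail; the kubeconfig path is copied from the log, and the 6-minute timeout here is an assumption):

    // Sketch: poll a node's Ready condition until it is True or the
    // context expires, matching the cadence visible in the log.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                // This is the branch the log keeps hitting while the
                // apiserver refuses connections.
                fmt.Printf("error getting node %q (will retry): %v\n", name, err)
            } else {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        if err := waitNodeReady(ctx, cs, "functional-534748"); err != nil {
            fmt.Println("node never became Ready:", err)
        }
    }
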
	I1210 06:32:01.210631  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:32:01.270135  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:01.273736  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:01.273765  830558 retry.go:31] will retry after 7.330909522s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... poll continues: GET https://192.168.49.2:8441/api/v1/nodes/functional-534748 at 06:32:01.550 and 06:32:02.050, both with empty responses; node_ready.go:55 connection-refused warning at 06:32:02.050 ...]
	I1210 06:32:02.477366  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:32:02.540275  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:02.540316  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:02.540336  830558 retry.go:31] will retry after 13.941322707s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
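
Each apply here fails during client-side validation, before anything reaches the cluster: kubectl first downloads the OpenAPI schema from the server (the /openapi/v2 URL in the error) to validate the manifest, and with the apiserver refusing connections that fetch fails. The suggested --validate=false would only skip the schema fetch; the apply itself would still be refused. For reference, a sketch of re-running the logged command (paths copied verbatim from the log; it is only meaningful inside the minikube node):

    // Sketch: the apply command minikube keeps retrying over SSH, run
    // via os/exec. sudo accepts the leading KUBECONFIG=... argument as
    // an environment assignment.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "apply", "--force", "-f",
            "/etc/kubernetes/addons/storage-provisioner.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            // While the apiserver is down this reports "exit status 1",
            // matching the "Process exited with status 1" lines above.
            fmt.Println("apply failed:", err)
        }
    }
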
	[... poll continues every ~500ms from 06:32:02.550 to 06:32:08.550, all with empty responses; node_ready.go:55 connection-refused warnings at 06:32:04.051, 06:32:06.051 and 06:32:08.051 ...]
	I1210 06:32:08.605823  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:32:08.661807  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:08.666022  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:08.666054  830558 retry.go:31] will retry after 18.459732711s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
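
The round_trippers.go:527/632 pairs throughout this log come from client-go's request debug logging, which wraps the HTTP transport and prints the request verb, URL and headers, then the response status and latency. A stripped-down equivalent (a hypothetical wrapper in the same spirit, not the k8s.io source):

    // Sketch: a logging http.RoundTripper producing output shaped like
    // the "Request"/"Response" lines in this log.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    type loggingRT struct{ next http.RoundTripper }

    func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
        fmt.Printf("Request verb=%q url=%q headers=%v\n", req.Method, req.URL, req.Header)
        start := time.Now()
        resp, err := l.next.RoundTrip(req)
        ms := time.Since(start).Milliseconds()
        if err != nil {
            // A refused connection yields an empty status, as seen above.
            fmt.Printf("Response status=%q milliseconds=%d err=%v\n", "", ms, err)
            return nil, err
        }
        fmt.Printf("Response status=%q milliseconds=%d\n", resp.Status, ms)
        return resp, nil
    }

    func main() {
        client := &http.Client{Transport: loggingRT{http.DefaultTransport}}
        // Against the address from the log this fails exactly like the
        // entries above while the apiserver is down.
        _, _ = client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-534748")
    }
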
	[... poll continues every ~500ms from 06:32:09.050 to 06:32:16.050, all with empty responses; node_ready.go:55 connection-refused warnings at 06:32:10.550, 06:32:13.051 and 06:32:15.551 ...]
	I1210 06:32:16.482787  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:32:16.542663  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:16.546278  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:16.546307  830558 retry.go:31] will retry after 7.242230365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... poll continues every ~500ms from 06:32:16.550 to 06:32:23.550, all with empty responses; node_ready.go:55 connection-refused warnings at 06:32:18.051, 06:32:20.550 and 06:32:22.550 ...]
	I1210 06:32:23.788809  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:32:23.847955  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:23.851833  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:23.851867  830558 retry.go:31] will retry after 12.516286884s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... poll continues every ~500ms from 06:32:24.050 to 06:32:27.050, all with empty responses; node_ready.go:55 connection-refused warnings at 06:32:24.550 and 06:32:27.050 ...]
	I1210 06:32:27.126908  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:32:27.191358  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:27.191398  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:27.191417  830558 retry.go:31] will retry after 11.065094951s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... poll continues every ~500ms from 06:32:27.550 to 06:32:36.050, all with empty responses; node_ready.go:55 connection-refused warnings at 06:32:29.050, 06:32:31.050, 06:32:33.550 and 06:32:35.551 ...]
	I1210 06:32:36.369119  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:32:36.431728  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:36.431764  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:36.431783  830558 retry.go:31] will retry after 39.090862924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
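
Every failure in this stretch is the same dial tcp ... connect: connection refused, which means nothing is listening on port 8441 at all (the apiserver process is down), as opposed to a timeout, which would point at networking or a firewall. A short probe makes the distinction explicit (address copied from the log):

    // Sketch: distinguish "connection refused" (no listener on the
    // port) from a timeout (packets not getting through).
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
        if err != nil {
            // "connection refused" => the kube-apiserver is not running.
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("port 8441 is accepting connections")
    }
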
	[... poll continues every ~500ms from 06:32:36.550 to 06:32:38.050, all with empty responses; node_ready.go:55 connection-refused warning at 06:32:38.050 ...]
	I1210 06:32:38.256706  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:32:38.315606  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:38.315652  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:38.315671  830558 retry.go:31] will retry after 24.874249468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... poll continues every ~500ms from 06:32:38.550 to 06:32:42.050, all with empty responses; node_ready.go:55 connection-refused warning at 06:32:40.051 ...]
	I1210 06:32:42.550115  830558 type.go:168] "Request Body" body=""
	I1210 06:32:42.550200  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:42.550557  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:42.550613  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:43.050266  830558 type.go:168] "Request Body" body=""
	I1210 06:32:43.050343  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:43.050649  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:43.549979  830558 type.go:168] "Request Body" body=""
	I1210 06:32:43.550060  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:43.550388  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:44.049985  830558 type.go:168] "Request Body" body=""
	I1210 06:32:44.050061  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:44.050403  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:44.549913  830558 type.go:168] "Request Body" body=""
	I1210 06:32:44.549995  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:44.550254  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:45.050148  830558 type.go:168] "Request Body" body=""
	I1210 06:32:45.050255  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:45.050774  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:45.050854  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:45.549930  830558 type.go:168] "Request Body" body=""
	I1210 06:32:45.550027  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:45.550349  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:46.050055  830558 type.go:168] "Request Body" body=""
	I1210 06:32:46.050137  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:46.050451  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:46.550031  830558 type.go:168] "Request Body" body=""
	I1210 06:32:46.550106  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:46.550449  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:47.050187  830558 type.go:168] "Request Body" body=""
	I1210 06:32:47.050264  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:47.050652  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:47.550359  830558 type.go:168] "Request Body" body=""
	I1210 06:32:47.550435  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:47.550733  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:47.550791  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:48.050535  830558 type.go:168] "Request Body" body=""
	I1210 06:32:48.050612  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:48.050950  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:48.550625  830558 type.go:168] "Request Body" body=""
	I1210 06:32:48.550703  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:48.551027  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:49.050305  830558 type.go:168] "Request Body" body=""
	I1210 06:32:49.050380  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:49.050665  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:49.550026  830558 type.go:168] "Request Body" body=""
	I1210 06:32:49.550106  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:49.550494  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:50.050211  830558 type.go:168] "Request Body" body=""
	I1210 06:32:50.050293  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:50.050654  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:50.050715  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:50.550658  830558 type.go:168] "Request Body" body=""
	I1210 06:32:50.550732  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:50.550987  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:51.050776  830558 type.go:168] "Request Body" body=""
	I1210 06:32:51.050858  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:51.051172  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:51.549919  830558 type.go:168] "Request Body" body=""
	I1210 06:32:51.549999  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:51.550341  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:52.049978  830558 type.go:168] "Request Body" body=""
	I1210 06:32:52.050053  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:52.050371  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:52.550001  830558 type.go:168] "Request Body" body=""
	I1210 06:32:52.550075  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:52.550411  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:52.550496  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:53.050025  830558 type.go:168] "Request Body" body=""
	I1210 06:32:53.050100  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:53.050443  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:53.550167  830558 type.go:168] "Request Body" body=""
	I1210 06:32:53.550287  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:53.550564  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:54.050027  830558 type.go:168] "Request Body" body=""
	I1210 06:32:54.050122  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:54.050442  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:54.550226  830558 type.go:168] "Request Body" body=""
	I1210 06:32:54.550303  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:54.550659  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:54.550719  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:55.049976  830558 type.go:168] "Request Body" body=""
	I1210 06:32:55.050049  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:55.050343  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:55.550553  830558 type.go:168] "Request Body" body=""
	I1210 06:32:55.550627  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:55.550930  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:56.050724  830558 type.go:168] "Request Body" body=""
	I1210 06:32:56.050807  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:56.051136  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:56.550392  830558 type.go:168] "Request Body" body=""
	I1210 06:32:56.550490  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:56.550765  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:56.550815  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:57.050617  830558 type.go:168] "Request Body" body=""
	I1210 06:32:57.050698  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:57.051032  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:57.550880  830558 type.go:168] "Request Body" body=""
	I1210 06:32:57.550957  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:57.551319  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:58.050503  830558 type.go:168] "Request Body" body=""
	I1210 06:32:58.050584  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:58.050859  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:58.550636  830558 type.go:168] "Request Body" body=""
	I1210 06:32:58.550712  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:58.551061  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:58.551119  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:59.050715  830558 type.go:168] "Request Body" body=""
	I1210 06:32:59.050796  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:59.051120  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:59.550919  830558 type.go:168] "Request Body" body=""
	I1210 06:32:59.550998  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:59.551267  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:00.050026  830558 type.go:168] "Request Body" body=""
	I1210 06:33:00.050119  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:00.052318  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:33:00.550554  830558 type.go:168] "Request Body" body=""
	I1210 06:33:00.550633  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:00.550978  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:01.050281  830558 type.go:168] "Request Body" body=""
	I1210 06:33:01.050351  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:01.050633  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:01.050680  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:01.550031  830558 type.go:168] "Request Body" body=""
	I1210 06:33:01.550108  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:01.550446  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:02.050197  830558 type.go:168] "Request Body" body=""
	I1210 06:33:02.050277  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:02.050651  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:02.550347  830558 type.go:168] "Request Body" body=""
	I1210 06:33:02.550420  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:02.550704  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:03.050003  830558 type.go:168] "Request Body" body=""
	I1210 06:33:03.050076  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:03.050408  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:03.190859  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:33:03.248648  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:33:03.248694  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:33:03.248794  830558 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
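The 'default-storageclass' failure above is downstream of the same root cause as the polling: kubectl cannot fetch the OpenAPI schema because nothing is listening on port 8441. A quick Go probe of the apiserver's /readyz endpoint makes that visible directly — the endpoint URL is taken from the log, and skipping TLS verification is purely an illustration shortcut (a real client would present the cluster CA):

// Sketch: probe the apiserver readiness endpoint that the failing
// kubectl apply depends on. InsecureSkipVerify is for illustration
// only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8441/readyz")
	if err != nil {
		// With the apiserver down this mirrors the log's
		// "connection refused" failures.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver status:", resp.Status)
}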
	I1210 06:33:03.550044  830558 type.go:168] "Request Body" body=""
	I1210 06:33:03.550122  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:03.550404  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:03.550454  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:04.050739  830558 type.go:168] "Request Body" body=""
	I1210 06:33:04.050814  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:04.051133  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:04.550977  830558 type.go:168] "Request Body" body=""
	I1210 06:33:04.551052  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:04.551392  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:05.050105  830558 type.go:168] "Request Body" body=""
	I1210 06:33:05.050184  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:05.050531  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:05.550435  830558 type.go:168] "Request Body" body=""
	I1210 06:33:05.550528  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:05.550787  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:05.550829  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:06.050557  830558 type.go:168] "Request Body" body=""
	I1210 06:33:06.050630  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:06.050961  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:06.550801  830558 type.go:168] "Request Body" body=""
	I1210 06:33:06.550879  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:06.551223  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:07.049908  830558 type.go:168] "Request Body" body=""
	I1210 06:33:07.049978  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:07.050285  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:07.550020  830558 type.go:168] "Request Body" body=""
	I1210 06:33:07.550098  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:07.550444  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:08.050180  830558 type.go:168] "Request Body" body=""
	I1210 06:33:08.050261  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:08.050656  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:08.050717  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:08.549966  830558 type.go:168] "Request Body" body=""
	I1210 06:33:08.550043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:08.550358  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:09.050033  830558 type.go:168] "Request Body" body=""
	I1210 06:33:09.050114  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:09.050432  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:09.550043  830558 type.go:168] "Request Body" body=""
	I1210 06:33:09.550121  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:09.550501  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:10.050055  830558 type.go:168] "Request Body" body=""
	I1210 06:33:10.050127  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:10.050401  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:10.550597  830558 type.go:168] "Request Body" body=""
	I1210 06:33:10.550682  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:10.551012  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:10.551066  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:11.050806  830558 type.go:168] "Request Body" body=""
	I1210 06:33:11.050883  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:11.051219  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:11.550460  830558 type.go:168] "Request Body" body=""
	I1210 06:33:11.550568  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:11.550827  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:12.050637  830558 type.go:168] "Request Body" body=""
	I1210 06:33:12.050716  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:12.051056  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:12.550879  830558 type.go:168] "Request Body" body=""
	I1210 06:33:12.550959  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:12.551385  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:12.551442  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:13.049924  830558 type.go:168] "Request Body" body=""
	I1210 06:33:13.049995  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:13.050301  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:13.549989  830558 type.go:168] "Request Body" body=""
	I1210 06:33:13.550071  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:13.550389  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:14.050007  830558 type.go:168] "Request Body" body=""
	I1210 06:33:14.050083  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:14.050417  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:14.550127  830558 type.go:168] "Request Body" body=""
	I1210 06:33:14.550200  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:14.550484  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:15.050155  830558 type.go:168] "Request Body" body=""
	I1210 06:33:15.050238  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:15.050632  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:15.050702  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:15.522803  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:33:15.550270  830558 type.go:168] "Request Body" body=""
	I1210 06:33:15.550344  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:15.550631  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:15.583628  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:33:15.587769  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:33:15.587875  830558 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
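The error text suggests turning validation off with --validate=false; that only skips the OpenAPI download, so the apply itself would still fail here until the apiserver comes back. For completeness, a hedged os/exec sketch of that bypass, reusing the binary and manifest paths from the log:

// Sketch: re-run the storage-provisioner apply with kubectl's
// --validate=false, as the error message suggests. Note this only
// bypasses schema validation; the apply still needs a reachable
// apiserver.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
	)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\nerr: %v\n", out, err)
}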
	I1210 06:33:15.590972  830558 out.go:179] * Enabled addons: 
	I1210 06:33:15.594685  830558 addons.go:530] duration metric: took 1m30.455573868s for enable addons: enabled=[]
	I1210 06:33:16.049998  830558 type.go:168] "Request Body" body=""
	I1210 06:33:16.050079  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:16.050410  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:16.550044  830558 type.go:168] "Request Body" body=""
	I1210 06:33:16.550122  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:16.550449  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:17.050003  830558 type.go:168] "Request Body" body=""
	I1210 06:33:17.050101  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:17.050382  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:17.549964  830558 type.go:168] "Request Body" body=""
	I1210 06:33:17.550065  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:17.550363  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:17.550413  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:18.050065  830558 type.go:168] "Request Body" body=""
	I1210 06:33:18.050137  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:18.050504  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:18.550195  830558 type.go:168] "Request Body" body=""
	I1210 06:33:18.550271  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:18.550617  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:19.050795  830558 type.go:168] "Request Body" body=""
	I1210 06:33:19.050864  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:19.051173  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:19.550924  830558 type.go:168] "Request Body" body=""
	I1210 06:33:19.551041  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:19.551366  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:19.551422  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:20.049936  830558 type.go:168] "Request Body" body=""
	I1210 06:33:20.050041  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:20.050392  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:20.549967  830558 type.go:168] "Request Body" body=""
	I1210 06:33:20.550046  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:20.550354  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:21.050040  830558 type.go:168] "Request Body" body=""
	I1210 06:33:21.050115  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:21.050434  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:21.550032  830558 type.go:168] "Request Body" body=""
	I1210 06:33:21.550110  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:21.550431  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:22.049927  830558 type.go:168] "Request Body" body=""
	I1210 06:33:22.049998  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:22.050320  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:22.050369  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:22.550025  830558 type.go:168] "Request Body" body=""
	I1210 06:33:22.550107  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:22.550455  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:23.050195  830558 type.go:168] "Request Body" body=""
	I1210 06:33:23.050272  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:23.050681  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:23.549948  830558 type.go:168] "Request Body" body=""
	I1210 06:33:23.550016  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:23.550276  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:24.049985  830558 type.go:168] "Request Body" body=""
	I1210 06:33:24.050060  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:24.050402  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:24.050460  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:24.550132  830558 type.go:168] "Request Body" body=""
	I1210 06:33:24.550213  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:24.550552  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:25.049958  830558 type.go:168] "Request Body" body=""
	I1210 06:33:25.050033  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:25.050287  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:25.550502  830558 type.go:168] "Request Body" body=""
	I1210 06:33:25.550576  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:25.550881  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:26.050647  830558 type.go:168] "Request Body" body=""
	I1210 06:33:26.050720  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:26.051065  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:26.051131  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:26.550815  830558 type.go:168] "Request Body" body=""
	I1210 06:33:26.550883  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:26.551145  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:27.049919  830558 type.go:168] "Request Body" body=""
	I1210 06:33:27.050002  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:27.050335  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:27.550026  830558 type.go:168] "Request Body" body=""
	I1210 06:33:27.550106  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:27.550459  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:28.050765  830558 type.go:168] "Request Body" body=""
	I1210 06:33:28.050846  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:28.051128  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:28.051173  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-534748 poll repeats every ~500 ms from 06:33:28.5 through 06:34:29; each response is logged as status="" headers="" (0-5 ms), and node_ready.go:55 logs a "connection refused (will retry)" warning roughly every 2.5 s throughout; the final cycle follows ...]
	I1210 06:34:29.549921  830558 type.go:168] "Request Body" body=""
	I1210 06:34:29.549996  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:29.550254  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:30.050073  830558 type.go:168] "Request Body" body=""
	I1210 06:34:30.050159  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:30.050509  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:30.050563  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:30.550516  830558 type.go:168] "Request Body" body=""
	I1210 06:34:30.550620  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:30.550952  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:31.050272  830558 type.go:168] "Request Body" body=""
	I1210 06:34:31.050339  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:31.050673  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:31.550029  830558 type.go:168] "Request Body" body=""
	I1210 06:34:31.550101  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:31.550423  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:32.050170  830558 type.go:168] "Request Body" body=""
	I1210 06:34:32.050245  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:32.050587  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:32.050647  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:32.550304  830558 type.go:168] "Request Body" body=""
	I1210 06:34:32.550386  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:32.550677  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:33.049995  830558 type.go:168] "Request Body" body=""
	I1210 06:34:33.050069  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:33.050375  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:33.550029  830558 type.go:168] "Request Body" body=""
	I1210 06:34:33.550108  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:33.550519  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:34.050638  830558 type.go:168] "Request Body" body=""
	I1210 06:34:34.050710  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:34.051024  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:34.051085  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:34.550840  830558 type.go:168] "Request Body" body=""
	I1210 06:34:34.550922  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:34.551274  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:35.050007  830558 type.go:168] "Request Body" body=""
	I1210 06:34:35.050092  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:35.050451  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:35.550503  830558 type.go:168] "Request Body" body=""
	I1210 06:34:35.550574  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:35.550888  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:36.050719  830558 type.go:168] "Request Body" body=""
	I1210 06:34:36.050822  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:36.051263  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:36.051321  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:36.550954  830558 type.go:168] "Request Body" body=""
	I1210 06:34:36.551056  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:36.551466  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:37.050811  830558 type.go:168] "Request Body" body=""
	I1210 06:34:37.050890  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:37.051215  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:37.549947  830558 type.go:168] "Request Body" body=""
	I1210 06:34:37.550028  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:37.550350  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:38.050027  830558 type.go:168] "Request Body" body=""
	I1210 06:34:38.050107  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:38.050495  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:38.550034  830558 type.go:168] "Request Body" body=""
	I1210 06:34:38.550118  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:38.550387  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:38.550431  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:39.050030  830558 type.go:168] "Request Body" body=""
	I1210 06:34:39.050113  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:39.050460  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:39.550021  830558 type.go:168] "Request Body" body=""
	I1210 06:34:39.550094  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:39.550455  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:40.050212  830558 type.go:168] "Request Body" body=""
	I1210 06:34:40.050299  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:40.050616  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:40.550723  830558 type.go:168] "Request Body" body=""
	I1210 06:34:40.550800  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:40.551131  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:40.551184  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:41.050959  830558 type.go:168] "Request Body" body=""
	I1210 06:34:41.051050  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:41.051405  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:41.550069  830558 type.go:168] "Request Body" body=""
	I1210 06:34:41.550140  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:41.550408  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:42.050053  830558 type.go:168] "Request Body" body=""
	I1210 06:34:42.050128  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:42.050423  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:42.549998  830558 type.go:168] "Request Body" body=""
	I1210 06:34:42.550074  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:42.550426  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:43.049964  830558 type.go:168] "Request Body" body=""
	I1210 06:34:43.050040  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:43.050364  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:43.050427  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:43.550060  830558 type.go:168] "Request Body" body=""
	I1210 06:34:43.550138  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:43.550432  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:44.050174  830558 type.go:168] "Request Body" body=""
	I1210 06:34:44.050254  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:44.050577  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:44.550265  830558 type.go:168] "Request Body" body=""
	I1210 06:34:44.550337  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:44.550631  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:45.050106  830558 type.go:168] "Request Body" body=""
	I1210 06:34:45.050225  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:45.051475  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1210 06:34:45.051555  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:45.550586  830558 type.go:168] "Request Body" body=""
	I1210 06:34:45.550670  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:45.551004  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:46.050308  830558 type.go:168] "Request Body" body=""
	I1210 06:34:46.050387  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:46.050713  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:46.550592  830558 type.go:168] "Request Body" body=""
	I1210 06:34:46.550668  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:46.551031  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:47.050814  830558 type.go:168] "Request Body" body=""
	I1210 06:34:47.050890  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:47.051189  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:47.550459  830558 type.go:168] "Request Body" body=""
	I1210 06:34:47.550545  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:47.550844  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:47.550902  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:48.050660  830558 type.go:168] "Request Body" body=""
	I1210 06:34:48.050735  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:48.051052  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:48.550831  830558 type.go:168] "Request Body" body=""
	I1210 06:34:48.550907  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:48.551256  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:49.050342  830558 type.go:168] "Request Body" body=""
	I1210 06:34:49.050418  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:49.050723  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:49.550042  830558 type.go:168] "Request Body" body=""
	I1210 06:34:49.550119  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:49.550450  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:50.050211  830558 type.go:168] "Request Body" body=""
	I1210 06:34:50.050296  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:50.050688  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:50.050747  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:50.550446  830558 type.go:168] "Request Body" body=""
	I1210 06:34:50.550545  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:50.550803  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:51.050575  830558 type.go:168] "Request Body" body=""
	I1210 06:34:51.050658  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:51.050992  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:51.550764  830558 type.go:168] "Request Body" body=""
	I1210 06:34:51.550839  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:51.551183  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:52.050947  830558 type.go:168] "Request Body" body=""
	I1210 06:34:52.051021  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:52.051295  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:52.051339  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:52.550020  830558 type.go:168] "Request Body" body=""
	I1210 06:34:52.550102  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:52.550487  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:53.050213  830558 type.go:168] "Request Body" body=""
	I1210 06:34:53.050304  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:53.050648  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:53.549965  830558 type.go:168] "Request Body" body=""
	I1210 06:34:53.550042  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:53.550369  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:54.050024  830558 type.go:168] "Request Body" body=""
	I1210 06:34:54.050101  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:54.050479  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:54.550177  830558 type.go:168] "Request Body" body=""
	I1210 06:34:54.550254  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:54.550626  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:54.550686  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:55.049960  830558 type.go:168] "Request Body" body=""
	I1210 06:34:55.050038  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:55.050307  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:55.550536  830558 type.go:168] "Request Body" body=""
	I1210 06:34:55.550618  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:55.550953  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:56.050765  830558 type.go:168] "Request Body" body=""
	I1210 06:34:56.050845  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:56.051194  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:56.549892  830558 type.go:168] "Request Body" body=""
	I1210 06:34:56.549977  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:56.550245  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:57.049958  830558 type.go:168] "Request Body" body=""
	I1210 06:34:57.050040  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:57.050378  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:57.050439  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:57.549988  830558 type.go:168] "Request Body" body=""
	I1210 06:34:57.550071  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:57.550412  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:58.049966  830558 type.go:168] "Request Body" body=""
	I1210 06:34:58.050043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:58.050321  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:58.550038  830558 type.go:168] "Request Body" body=""
	I1210 06:34:58.550112  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:58.550398  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:59.050037  830558 type.go:168] "Request Body" body=""
	I1210 06:34:59.050117  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:59.050434  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:59.050536  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:59.550090  830558 type.go:168] "Request Body" body=""
	I1210 06:34:59.550165  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:59.550488  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:00.050082  830558 type.go:168] "Request Body" body=""
	I1210 06:35:00.050172  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:00.050532  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:00.550871  830558 type.go:168] "Request Body" body=""
	I1210 06:35:00.551043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:00.551414  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:01.049974  830558 type.go:168] "Request Body" body=""
	I1210 06:35:01.050056  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:01.050409  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:01.550036  830558 type.go:168] "Request Body" body=""
	I1210 06:35:01.550123  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:01.550506  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:01.550566  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:02.050252  830558 type.go:168] "Request Body" body=""
	I1210 06:35:02.050334  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:02.050718  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:02.549994  830558 type.go:168] "Request Body" body=""
	I1210 06:35:02.550066  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:02.550338  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:03.050044  830558 type.go:168] "Request Body" body=""
	I1210 06:35:03.050121  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:03.050446  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:03.550201  830558 type.go:168] "Request Body" body=""
	I1210 06:35:03.550278  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:03.550618  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:03.550677  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:04.049974  830558 type.go:168] "Request Body" body=""
	I1210 06:35:04.050053  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:04.050326  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:04.550002  830558 type.go:168] "Request Body" body=""
	I1210 06:35:04.550073  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:04.550366  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:05.050030  830558 type.go:168] "Request Body" body=""
	I1210 06:35:05.050111  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:05.050435  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:05.550392  830558 type.go:168] "Request Body" body=""
	I1210 06:35:05.550487  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:05.550754  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:05.550797  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:06.050578  830558 type.go:168] "Request Body" body=""
	I1210 06:35:06.050658  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:06.051028  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:06.550698  830558 type.go:168] "Request Body" body=""
	I1210 06:35:06.550789  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:06.551170  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:07.050527  830558 type.go:168] "Request Body" body=""
	I1210 06:35:07.050605  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:07.050889  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:07.550670  830558 type.go:168] "Request Body" body=""
	I1210 06:35:07.550754  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:07.551130  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:07.551186  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:08.049928  830558 type.go:168] "Request Body" body=""
	I1210 06:35:08.050023  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:08.050388  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:08.550709  830558 type.go:168] "Request Body" body=""
	I1210 06:35:08.550783  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:08.551109  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:09.050933  830558 type.go:168] "Request Body" body=""
	I1210 06:35:09.051017  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:09.051361  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:09.550061  830558 type.go:168] "Request Body" body=""
	I1210 06:35:09.550147  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:09.550539  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:10.049990  830558 type.go:168] "Request Body" body=""
	I1210 06:35:10.050069  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:10.050353  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:10.050409  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:10.550333  830558 type.go:168] "Request Body" body=""
	I1210 06:35:10.550412  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:10.550769  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:11.050573  830558 type.go:168] "Request Body" body=""
	I1210 06:35:11.050649  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:11.050998  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:11.550270  830558 type.go:168] "Request Body" body=""
	I1210 06:35:11.550348  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:11.550636  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:12.050031  830558 type.go:168] "Request Body" body=""
	I1210 06:35:12.050112  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:12.050460  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:12.050544  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:12.550016  830558 type.go:168] "Request Body" body=""
	I1210 06:35:12.550093  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:12.550407  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:13.049930  830558 type.go:168] "Request Body" body=""
	I1210 06:35:13.050003  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:13.050262  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:13.549947  830558 type.go:168] "Request Body" body=""
	I1210 06:35:13.550020  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:13.550364  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:14.049948  830558 type.go:168] "Request Body" body=""
	I1210 06:35:14.050033  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:14.050379  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:14.549995  830558 type.go:168] "Request Body" body=""
	I1210 06:35:14.550069  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:14.550374  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:14.550430  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:15.049977  830558 type.go:168] "Request Body" body=""
	I1210 06:35:15.050080  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:15.050511  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:15.550549  830558 type.go:168] "Request Body" body=""
	I1210 06:35:15.550643  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:15.550979  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:16.050252  830558 type.go:168] "Request Body" body=""
	I1210 06:35:16.050330  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:16.050628  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:16.550008  830558 type.go:168] "Request Body" body=""
	I1210 06:35:16.550088  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:16.550421  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:16.550501  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:17.050213  830558 type.go:168] "Request Body" body=""
	I1210 06:35:17.050312  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:17.050693  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:17.549908  830558 type.go:168] "Request Body" body=""
	I1210 06:35:17.549986  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:17.550246  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:18.049930  830558 type.go:168] "Request Body" body=""
	I1210 06:35:18.050001  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:18.050297  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:18.549986  830558 type.go:168] "Request Body" body=""
	I1210 06:35:18.550063  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:18.550458  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:18.550526  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-534748 poll repeats every ~500 ms from 06:35:19.050 through 06:36:20.050 with identical protobuf/JSON Accept and minikube User-Agent headers; every response is logged with empty status in 0-4 ms because the apiserver is not yet accepting connections, and node_ready.go:55 logs the identical "will retry" warning (dial tcp 192.168.49.2:8441: connect: connection refused) roughly every 2-2.5 s throughout this window ...]
	I1210 06:36:20.550656  830558 type.go:168] "Request Body" body=""
	I1210 06:36:20.550733  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:20.551013  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:21.050835  830558 type.go:168] "Request Body" body=""
	I1210 06:36:21.050921  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:21.051263  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:21.549995  830558 type.go:168] "Request Body" body=""
	I1210 06:36:21.550076  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:21.550423  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:22.050133  830558 type.go:168] "Request Body" body=""
	I1210 06:36:22.050216  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:22.050512  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:22.550029  830558 type.go:168] "Request Body" body=""
	I1210 06:36:22.550116  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:22.550449  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:22.550527  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:23.050001  830558 type.go:168] "Request Body" body=""
	I1210 06:36:23.050090  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:23.050430  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:23.549960  830558 type.go:168] "Request Body" body=""
	I1210 06:36:23.550028  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:23.550287  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:24.050045  830558 type.go:168] "Request Body" body=""
	I1210 06:36:24.050121  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:24.050454  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:24.550232  830558 type.go:168] "Request Body" body=""
	I1210 06:36:24.550319  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:24.550669  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:24.550726  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:25.049975  830558 type.go:168] "Request Body" body=""
	I1210 06:36:25.050054  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:25.050347  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:25.550435  830558 type.go:168] "Request Body" body=""
	I1210 06:36:25.550531  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:25.550872  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:26.050576  830558 type.go:168] "Request Body" body=""
	I1210 06:36:26.050655  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:26.051009  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:26.550723  830558 type.go:168] "Request Body" body=""
	I1210 06:36:26.550798  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:26.551067  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:26.551119  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:27.050878  830558 type.go:168] "Request Body" body=""
	I1210 06:36:27.050952  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:27.051289  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:27.550017  830558 type.go:168] "Request Body" body=""
	I1210 06:36:27.550094  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:27.550415  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:28.049942  830558 type.go:168] "Request Body" body=""
	I1210 06:36:28.050024  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:28.050288  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:28.550006  830558 type.go:168] "Request Body" body=""
	I1210 06:36:28.550084  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:28.550438  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:29.050159  830558 type.go:168] "Request Body" body=""
	I1210 06:36:29.050234  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:29.050566  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:29.050621  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:29.550905  830558 type.go:168] "Request Body" body=""
	I1210 06:36:29.550972  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:29.551292  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:30.050116  830558 type.go:168] "Request Body" body=""
	I1210 06:36:30.050204  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:30.050559  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:30.550551  830558 type.go:168] "Request Body" body=""
	I1210 06:36:30.550628  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:30.550956  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:31.050278  830558 type.go:168] "Request Body" body=""
	I1210 06:36:31.050353  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:31.050643  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:31.050689  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:31.550005  830558 type.go:168] "Request Body" body=""
	I1210 06:36:31.550084  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:31.550415  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:32.050146  830558 type.go:168] "Request Body" body=""
	I1210 06:36:32.050220  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:32.050568  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:32.550834  830558 type.go:168] "Request Body" body=""
	I1210 06:36:32.550909  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:32.551181  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:33.049926  830558 type.go:168] "Request Body" body=""
	I1210 06:36:33.050020  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:33.050320  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:33.550027  830558 type.go:168] "Request Body" body=""
	I1210 06:36:33.550101  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:33.550421  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:33.550496  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:34.050148  830558 type.go:168] "Request Body" body=""
	I1210 06:36:34.050221  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:34.050554  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:34.550035  830558 type.go:168] "Request Body" body=""
	I1210 06:36:34.550113  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:34.550403  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:35.050050  830558 type.go:168] "Request Body" body=""
	I1210 06:36:35.050133  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:35.050454  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:35.550293  830558 type.go:168] "Request Body" body=""
	I1210 06:36:35.550366  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:35.550646  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:35.550688  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:36.050032  830558 type.go:168] "Request Body" body=""
	I1210 06:36:36.050119  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:36.050506  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:36.550078  830558 type.go:168] "Request Body" body=""
	I1210 06:36:36.550152  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:36.550514  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:37.050074  830558 type.go:168] "Request Body" body=""
	I1210 06:36:37.050153  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:37.050427  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:37.550003  830558 type.go:168] "Request Body" body=""
	I1210 06:36:37.550086  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:37.550452  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:38.050242  830558 type.go:168] "Request Body" body=""
	I1210 06:36:38.050345  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:38.050820  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:38.050886  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:38.550627  830558 type.go:168] "Request Body" body=""
	I1210 06:36:38.550702  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:38.550965  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:39.050786  830558 type.go:168] "Request Body" body=""
	I1210 06:36:39.050858  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:39.051199  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:39.550826  830558 type.go:168] "Request Body" body=""
	I1210 06:36:39.550908  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:39.551239  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:40.049947  830558 type.go:168] "Request Body" body=""
	I1210 06:36:40.050037  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:40.050342  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:40.550382  830558 type.go:168] "Request Body" body=""
	I1210 06:36:40.550458  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:40.550826  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:40.550883  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
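
A quick way to interpret the recurring "connect: connection refused" is a raw TCP probe: refused means the host answered but no process is bound to port 8441 (the apiserver is down or restarting), whereas a timeout would instead point at the host or the Docker network. A sketch for manual diagnosis only, not part of the test suite:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the log above.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		// "connection refused": host reachable, port closed (apiserver down).
		// A dial timeout here would suggest the node itself is unreachable.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("port 8441 is accepting connections")
}
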
	I1210 06:36:41.050667  830558 type.go:168] "Request Body" body=""
	I1210 06:36:41.050745  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:41.051117  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:41.550878  830558 type.go:168] "Request Body" body=""
	I1210 06:36:41.550958  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:41.551274  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:42.050917  830558 type.go:168] "Request Body" body=""
	I1210 06:36:42.050997  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:42.051354  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:42.550036  830558 type.go:168] "Request Body" body=""
	I1210 06:36:42.550117  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:42.550436  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:43.049951  830558 type.go:168] "Request Body" body=""
	I1210 06:36:43.050067  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:43.050321  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:43.050369  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:43.549987  830558 type.go:168] "Request Body" body=""
	I1210 06:36:43.550066  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:43.550404  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:44.050824  830558 type.go:168] "Request Body" body=""
	I1210 06:36:44.050905  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:44.051231  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:44.550482  830558 type.go:168] "Request Body" body=""
	I1210 06:36:44.550555  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:44.550855  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:45.050825  830558 type.go:168] "Request Body" body=""
	I1210 06:36:45.050916  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:45.051222  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:45.051274  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:45.550929  830558 type.go:168] "Request Body" body=""
	I1210 06:36:45.551008  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:45.551345  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:46.049915  830558 type.go:168] "Request Body" body=""
	I1210 06:36:46.050010  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:46.050329  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:46.549983  830558 type.go:168] "Request Body" body=""
	I1210 06:36:46.550060  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:46.550400  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:47.050060  830558 type.go:168] "Request Body" body=""
	I1210 06:36:47.050141  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:47.050495  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:47.549925  830558 type.go:168] "Request Body" body=""
	I1210 06:36:47.549995  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:47.550273  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:47.550317  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:48.049994  830558 type.go:168] "Request Body" body=""
	I1210 06:36:48.050095  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:48.050460  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:48.550037  830558 type.go:168] "Request Body" body=""
	I1210 06:36:48.550112  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:48.550446  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:49.050030  830558 type.go:168] "Request Body" body=""
	I1210 06:36:49.050116  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:49.050497  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:49.550029  830558 type.go:168] "Request Body" body=""
	I1210 06:36:49.550104  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:49.550496  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:49.550554  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:50.050050  830558 type.go:168] "Request Body" body=""
	I1210 06:36:50.050125  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:50.050500  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:50.550519  830558 type.go:168] "Request Body" body=""
	I1210 06:36:50.550589  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:50.550850  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:51.050731  830558 type.go:168] "Request Body" body=""
	I1210 06:36:51.050803  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:51.051136  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:51.550907  830558 type.go:168] "Request Body" body=""
	I1210 06:36:51.550985  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:51.551292  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:51.551347  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:52.049966  830558 type.go:168] "Request Body" body=""
	I1210 06:36:52.050040  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:52.050305  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:52.549993  830558 type.go:168] "Request Body" body=""
	I1210 06:36:52.550070  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:52.550404  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:53.050044  830558 type.go:168] "Request Body" body=""
	I1210 06:36:53.050127  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:53.050427  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:53.550649  830558 type.go:168] "Request Body" body=""
	I1210 06:36:53.550726  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:53.551013  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:54.050845  830558 type.go:168] "Request Body" body=""
	I1210 06:36:54.050929  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:54.051278  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:54.051340  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:54.549988  830558 type.go:168] "Request Body" body=""
	I1210 06:36:54.550067  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:54.550384  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:55.049973  830558 type.go:168] "Request Body" body=""
	I1210 06:36:55.050057  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:55.050384  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:55.550589  830558 type.go:168] "Request Body" body=""
	I1210 06:36:55.550672  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:55.550984  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:56.050875  830558 type.go:168] "Request Body" body=""
	I1210 06:36:56.050955  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:56.051282  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:56.549975  830558 type.go:168] "Request Body" body=""
	I1210 06:36:56.550042  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:56.550349  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:56.550406  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:57.050072  830558 type.go:168] "Request Body" body=""
	I1210 06:36:57.050159  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:57.050499  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:57.549979  830558 type.go:168] "Request Body" body=""
	I1210 06:36:57.550054  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:57.550368  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:58.049963  830558 type.go:168] "Request Body" body=""
	I1210 06:36:58.050043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:58.050303  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:58.549986  830558 type.go:168] "Request Body" body=""
	I1210 06:36:58.550064  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:58.550409  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:58.550486  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:59.050159  830558 type.go:168] "Request Body" body=""
	I1210 06:36:59.050244  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:59.050617  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:59.550004  830558 type.go:168] "Request Body" body=""
	I1210 06:36:59.550080  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:59.550332  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:00.050088  830558 type.go:168] "Request Body" body=""
	I1210 06:37:00.050180  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:00.050543  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:00.550848  830558 type.go:168] "Request Body" body=""
	I1210 06:37:00.550935  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:00.551280  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:00.551339  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:01.050564  830558 type.go:168] "Request Body" body=""
	I1210 06:37:01.050644  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:01.050904  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:01.550685  830558 type.go:168] "Request Body" body=""
	I1210 06:37:01.550761  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:01.551120  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:02.050955  830558 type.go:168] "Request Body" body=""
	I1210 06:37:02.051039  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:02.051359  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:02.550089  830558 type.go:168] "Request Body" body=""
	I1210 06:37:02.550158  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:02.550512  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:03.049995  830558 type.go:168] "Request Body" body=""
	I1210 06:37:03.050079  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:03.050427  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:03.050509  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:03.549974  830558 type.go:168] "Request Body" body=""
	I1210 06:37:03.550095  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:03.550438  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:04.050664  830558 type.go:168] "Request Body" body=""
	I1210 06:37:04.050742  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:04.051055  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:04.550863  830558 type.go:168] "Request Body" body=""
	I1210 06:37:04.550938  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:04.551272  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:05.049983  830558 type.go:168] "Request Body" body=""
	I1210 06:37:05.050061  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:05.050389  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:05.550411  830558 type.go:168] "Request Body" body=""
	I1210 06:37:05.550500  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:05.550764  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:05.550808  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
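
What node_ready.go keeps retrying is, in substance, a read of the node's "Ready" condition. A minimal client-go sketch of that check, under the assumption of a standard kubeconfig-based client (the kubeconfig path is a placeholder; the test binary wires its client differently):

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path: point this at the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"functional-534748", metav1.GetOptions{})
	if err != nil {
		// This is the call that fails with "connection refused" above.
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready condition: %s (reason: %s)\n", c.Status, c.Reason)
		}
	}
}
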
	I1210 06:37:06.050441  830558 type.go:168] "Request Body" body=""
	I1210 06:37:06.050533  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:06.050866  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:06.550678  830558 type.go:168] "Request Body" body=""
	I1210 06:37:06.550761  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:06.551104  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:07.050870  830558 type.go:168] "Request Body" body=""
	I1210 06:37:07.050944  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:07.051251  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:07.549972  830558 type.go:168] "Request Body" body=""
	I1210 06:37:07.550053  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:07.550410  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:08.050155  830558 type.go:168] "Request Body" body=""
	I1210 06:37:08.050239  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:08.050601  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:08.050664  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:08.549949  830558 type.go:168] "Request Body" body=""
	I1210 06:37:08.550023  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:08.550357  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:09.050028  830558 type.go:168] "Request Body" body=""
	I1210 06:37:09.050105  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:09.050443  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:09.550204  830558 type.go:168] "Request Body" body=""
	I1210 06:37:09.550291  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:09.550711  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:10.050422  830558 type.go:168] "Request Body" body=""
	I1210 06:37:10.050521  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:10.050851  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:10.050899  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:10.550710  830558 type.go:168] "Request Body" body=""
	I1210 06:37:10.550785  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:10.551141  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical polls elided: the GET https://192.168.49.2:8441/api/v1/nodes/functional-534748 request/response pair above repeats every ~500ms from 06:37:11 through 06:37:44, each answered only with "dial tcp 192.168.49.2:8441: connect: connection refused", and the node_ready.go:55 retry warning recurs at roughly 2.5-second intervals ...]
	I1210 06:37:45.050638  830558 type.go:168] "Request Body" body=""
	I1210 06:37:45.050730  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:45.051044  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:45.550527  830558 type.go:168] "Request Body" body=""
	I1210 06:37:45.550601  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:45.550931  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:46.050582  830558 type.go:168] "Request Body" body=""
	I1210 06:37:46.050725  830558 node_ready.go:38] duration metric: took 6m0.000935284s for node "functional-534748" to be "Ready" ...
	I1210 06:37:46.053848  830558 out.go:203] 
	W1210 06:37:46.056787  830558 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 06:37:46.056817  830558 out.go:285] * 
	W1210 06:37:46.059108  830558 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:37:46.062914  830558 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 06:37:53 functional-534748 containerd[5224]: time="2025-12-10T06:37:53.520352544Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:37:54 functional-534748 containerd[5224]: time="2025-12-10T06:37:54.565252348Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 10 06:37:54 functional-534748 containerd[5224]: time="2025-12-10T06:37:54.567605851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 10 06:37:54 functional-534748 containerd[5224]: time="2025-12-10T06:37:54.574773209Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:37:54 functional-534748 containerd[5224]: time="2025-12-10T06:37:54.575228705Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:37:55 functional-534748 containerd[5224]: time="2025-12-10T06:37:55.553226651Z" level=info msg="No images store for sha256:0c729ebacec82a4a862e39f331b1dc02cab7e87861cddd7a8db1fd64af001e55"
	Dec 10 06:37:55 functional-534748 containerd[5224]: time="2025-12-10T06:37:55.555377928Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-534748\""
	Dec 10 06:37:55 functional-534748 containerd[5224]: time="2025-12-10T06:37:55.563505518Z" level=info msg="ImageCreate event name:\"sha256:54106a51504f7a89ca38a9b17f1e7c790a91bdd52bce5badc4621cab1917817f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:37:55 functional-534748 containerd[5224]: time="2025-12-10T06:37:55.563948461Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-534748\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:37:56 functional-534748 containerd[5224]: time="2025-12-10T06:37:56.365118911Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 10 06:37:56 functional-534748 containerd[5224]: time="2025-12-10T06:37:56.367504054Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 10 06:37:56 functional-534748 containerd[5224]: time="2025-12-10T06:37:56.369566928Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 10 06:37:56 functional-534748 containerd[5224]: time="2025-12-10T06:37:56.381672073Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.262809286Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.265411703Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.267292798Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.275537420Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.410388053Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.412560664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.420398890Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.420731070Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.594360566Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.596511088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.604043292Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.604379230Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:59.311757    9189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:59.312525    9189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:59.313417    9189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:59.314949    9189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:59.315482    9189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 06:37:59 up  5:20,  0 user,  load average: 0.52, 0.32, 0.78
	Linux functional-534748 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:37:55 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:37:56 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 823.
	Dec 10 06:37:56 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:56 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:56 functional-534748 kubelet[8942]: E1210 06:37:56.606161    8942 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:37:56 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:37:56 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:37:57 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 824.
	Dec 10 06:37:57 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:57 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:57 functional-534748 kubelet[9034]: E1210 06:37:57.356874    9034 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:37:57 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:37:57 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:37:58 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 825.
	Dec 10 06:37:58 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:58 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:58 functional-534748 kubelet[9088]: E1210 06:37:58.108429    9088 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:37:58 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:37:58 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:37:58 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 10 06:37:58 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:58 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:58 functional-534748 kubelet[9108]: E1210 06:37:58.859587    9108 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:37:58 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:37:58 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
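The kubelet section of the log above pins down the root cause: kubelet v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host, so systemd restart-loops it (restart counter at 826) and nothing ever listens on 8441. A minimal sketch of the conventional cgroup-version check, assuming only the standard /sys/fs/cgroup layout (illustrative, not minikube's own detection helper):

	// cgroupcheck.go: on a cgroup v2 (unified) host, /sys/fs/cgroup/cgroup.controllers
	// exists; on a v1 host it does not, which is what trips kubelet here.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else {
			fmt.Println("cgroup v1: kubelet v1.35.0-beta.0 exits with the validation error above")
		}
	}

Running this inside the node container (e.g. via docker exec into functional-534748) would confirm which hierarchy the kicbase image sees on this AWS host.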
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748: exit status 2 (488.024279ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-534748" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.44s)
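The six-minute node_ready trace attached above is an ordinary client-go poll: one GET against /api/v1/nodes/functional-534748 every ~500ms until the 6m0s deadline, with "connection refused" treated as retryable. A minimal sketch of that shape using k8s.io/apimachinery's wait package (the names and stubbed condition are illustrative; minikube's node_ready.go has its own logic):

	package main

	import (
		"context"
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	// nodeReady stands in for the GET seen in the trace. Returning (false, nil)
	// on a refused connection keeps the poll retrying instead of aborting.
	func nodeReady(ctx context.Context) (bool, error) {
		return false, nil
	}

	func main() {
		// 500ms interval, 6m deadline: the cadence visible in the log.
		err := wait.PollUntilContextTimeout(context.Background(),
			500*time.Millisecond, 6*time.Minute, true, nodeReady)
		fmt.Println(err) // context deadline exceeded, matching GUEST_START above
	}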

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-534748 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-534748 get pods: exit status 1 (114.905048ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-534748 get pods": exit status 1
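Every failure in this group reduces to the same symptom: nothing accepting TCP connections on 8441. A hypothetical stand-alone probe (not part of the test suite) that reproduces the refused dial without involving kubectl or a kubeconfig:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same endpoint kubectl tried above; expect "connect: connection refused"
		// for as long as the apiserver is down.
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("probe:", err)
			return
		}
		conn.Close()
		fmt.Println("port 8441 is accepting connections")
	}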
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-534748
helpers_test.go:244: (dbg) docker inspect functional-534748:

-- stdout --
	[
	    {
	        "Id": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	        "Created": "2025-12-10T06:23:23.608302198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 825111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:23:23.673039154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hostname",
	        "HostsPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hosts",
	        "LogPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db-json.log",
	        "Name": "/functional-534748",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-534748:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-534748",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	                "LowerDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-534748",
	                "Source": "/var/lib/docker/volumes/functional-534748/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-534748",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-534748",
	                "name.minikube.sigs.k8s.io": "functional-534748",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3110adc4cbcb3c834e173481804e433b3e0d0c32a77d5d828fb821433a717f76",
	            "SandboxKey": "/var/run/docker/netns/3110adc4cbcb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-534748": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:67:c6:ed:32:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5c6adbf364f07065a2170dca5c03ba4b1a3df833c656e117d5727d09797ca30e",
	                    "EndpointID": "ba255b7075cbb83aa425533c0034d7778d609f47fa6442925eb6c8393edb0fa6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-534748",
	                        "afb46bc1850e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
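The inspect dump above is the usual starting point when a status check misbehaves: the bindings under NetworkSettings.Ports show where the container's SSH and apiserver endpoints are published on the host. A minimal sketch of pulling those out directly, using the same Go template the harness itself logs further down in this section:

	# Host port for the container's SSH endpoint (22/tcp) -> 33530 here
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-534748
	# Host port for the apiserver endpoint (8441/tcp) -> 33533 here
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-534748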
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748: exit status 2 (313.203868ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
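Note the asymmetry here: the host reports Running, yet the command exits 2, which is why the harness flags it as "may be ok". minikube encodes host/cluster/kubernetes health into the exit code bits, so a non-zero exit with a Running host points at a higher layer. Dropping the --format filter shows the full breakdown; a hedged example against the same profile:

	# Per-component status (host, kubelet, apiserver, kubeconfig);
	# a non-zero exit means at least one component is not in its expected state.
	out/minikube-linux-arm64 status -p functional-534748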
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-534748 logs -n 25: (1.004302161s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-634209 image ls --format short --alsologtostderr                                                                                             │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image   │ functional-634209 image ls --format yaml --alsologtostderr                                                                                              │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ functional-634209 ssh pgrep buildkitd                                                                                                                   │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ image   │ functional-634209 image ls --format json --alsologtostderr                                                                                              │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image   │ functional-634209 image build -t localhost/my-image:functional-634209 testdata/build --alsologtostderr                                                  │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image   │ functional-634209 image ls --format table --alsologtostderr                                                                                             │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image   │ functional-634209 image ls                                                                                                                              │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ delete  │ -p functional-634209                                                                                                                                    │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start   │ -p functional-534748 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ start   │ -p functional-534748 --alsologtostderr -v=8                                                                                                             │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:31 UTC │                     │
	│ cache   │ functional-534748 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ functional-534748 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ functional-534748 cache add registry.k8s.io/pause:latest                                                                                                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ functional-534748 cache add minikube-local-cache-test:functional-534748                                                                                 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ functional-534748 cache delete minikube-local-cache-test:functional-534748                                                                              │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ ssh     │ functional-534748 ssh sudo crictl images                                                                                                                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ ssh     │ functional-534748 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ ssh     │ functional-534748 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │                     │
	│ cache   │ functional-534748 cache reload                                                                                                                          │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ ssh     │ functional-534748 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ kubectl │ functional-534748 kubectl -- --context functional-534748 get pods                                                                                       │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
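The last audit row is the kubectl passthrough under test; its empty END TIME means no successful completion was recorded. The equivalent invocation, for reproducing by hand:

	# Everything after "--" is passed to the bundled kubectl against the named context
	out/minikube-linux-arm64 -p functional-534748 kubectl -- --context functional-534748 get pods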
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:31:40
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:31:40.279311  830558 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:31:40.279505  830558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:31:40.279534  830558 out.go:374] Setting ErrFile to fd 2...
	I1210 06:31:40.279556  830558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:31:40.279849  830558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 06:31:40.280242  830558 out.go:368] Setting JSON to false
	I1210 06:31:40.281164  830558 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18825,"bootTime":1765329476,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 06:31:40.281259  830558 start.go:143] virtualization:  
	I1210 06:31:40.284710  830558 out.go:179] * [functional-534748] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:31:40.288411  830558 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:31:40.288473  830558 notify.go:221] Checking for updates...
	I1210 06:31:40.295121  830558 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:31:40.302607  830558 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:40.305522  830558 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 06:31:40.308355  830558 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:31:40.311698  830558 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:31:40.315095  830558 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:31:40.315199  830558 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:31:40.353797  830558 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:31:40.353929  830558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:31:40.415859  830558 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:31:40.405265704 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:31:40.415979  830558 docker.go:319] overlay module found
	I1210 06:31:40.419085  830558 out.go:179] * Using the docker driver based on existing profile
	I1210 06:31:40.421970  830558 start.go:309] selected driver: docker
	I1210 06:31:40.421991  830558 start.go:927] validating driver "docker" against &{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:31:40.422101  830558 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:31:40.422196  830558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:31:40.479216  830558 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:31:40.46865578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:31:40.479663  830558 cni.go:84] Creating CNI manager for ""
	I1210 06:31:40.479723  830558 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:31:40.479768  830558 start.go:353] cluster config:
	{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:31:40.482983  830558 out.go:179] * Starting "functional-534748" primary control-plane node in "functional-534748" cluster
	I1210 06:31:40.485814  830558 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 06:31:40.488782  830558 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:31:40.491625  830558 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:31:40.491676  830558 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 06:31:40.491687  830558 cache.go:65] Caching tarball of preloaded images
	I1210 06:31:40.491736  830558 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:31:40.491792  830558 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 06:31:40.491804  830558 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1210 06:31:40.491917  830558 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/config.json ...
	I1210 06:31:40.511808  830558 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:31:40.511830  830558 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:31:40.511847  830558 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:31:40.511881  830558 start.go:360] acquireMachinesLock for functional-534748: {Name:mkd9a3d78ae3a00b69c5be0f7badb099aea924eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:31:40.511943  830558 start.go:364] duration metric: took 39.41µs to acquireMachinesLock for "functional-534748"
	I1210 06:31:40.511975  830558 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:31:40.511985  830558 fix.go:54] fixHost starting: 
	I1210 06:31:40.512241  830558 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:31:40.529256  830558 fix.go:112] recreateIfNeeded on functional-534748: state=Running err=<nil>
	W1210 06:31:40.529298  830558 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:31:40.532448  830558 out.go:252] * Updating the running docker "functional-534748" container ...
	I1210 06:31:40.532488  830558 machine.go:94] provisionDockerMachine start ...
	I1210 06:31:40.532584  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:40.550188  830558 main.go:143] libmachine: Using SSH client type: native
	I1210 06:31:40.550543  830558 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:31:40.550560  830558 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:31:40.681995  830558 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-534748
	
	I1210 06:31:40.682020  830558 ubuntu.go:182] provisioning hostname "functional-534748"
	I1210 06:31:40.682096  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:40.699737  830558 main.go:143] libmachine: Using SSH client type: native
	I1210 06:31:40.700054  830558 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:31:40.700072  830558 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-534748 && echo "functional-534748" | sudo tee /etc/hostname
	I1210 06:31:40.843977  830558 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-534748
	
	I1210 06:31:40.844083  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:40.862627  830558 main.go:143] libmachine: Using SSH client type: native
	I1210 06:31:40.862951  830558 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:31:40.862975  830558 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-534748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-534748/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-534748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:31:40.999052  830558 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:31:40.999087  830558 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 06:31:40.999116  830558 ubuntu.go:190] setting up certificates
	I1210 06:31:40.999127  830558 provision.go:84] configureAuth start
	I1210 06:31:40.999208  830558 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
	I1210 06:31:41.018099  830558 provision.go:143] copyHostCerts
	I1210 06:31:41.018148  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 06:31:41.018188  830558 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 06:31:41.018200  830558 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 06:31:41.018276  830558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 06:31:41.018376  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 06:31:41.018397  830558 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 06:31:41.018412  830558 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 06:31:41.018442  830558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 06:31:41.018539  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 06:31:41.018565  830558 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 06:31:41.018570  830558 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 06:31:41.018598  830558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 06:31:41.018664  830558 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.functional-534748 san=[127.0.0.1 192.168.49.2 functional-534748 localhost minikube]
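The server cert generated here carries the SAN list shown above (127.0.0.1, 192.168.49.2, the profile name, localhost, minikube). A hedged way to confirm what actually got baked in, using the host-side path from the log and a standard openssl inspection:

	# Print the Subject Alternative Name extension of the generated server cert
	openssl x509 -noout -text -in /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'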
	I1210 06:31:41.416959  830558 provision.go:177] copyRemoteCerts
	I1210 06:31:41.417039  830558 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:31:41.417085  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.434643  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:41.530263  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 06:31:41.530324  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:31:41.547539  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 06:31:41.547601  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:31:41.565054  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 06:31:41.565115  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 06:31:41.582586  830558 provision.go:87] duration metric: took 583.43959ms to configureAuth
	I1210 06:31:41.582635  830558 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:31:41.582823  830558 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:31:41.582837  830558 machine.go:97] duration metric: took 1.050342086s to provisionDockerMachine
	I1210 06:31:41.582845  830558 start.go:293] postStartSetup for "functional-534748" (driver="docker")
	I1210 06:31:41.582857  830558 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:31:41.582912  830558 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:31:41.582957  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.603404  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:41.698354  830558 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:31:41.701779  830558 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 06:31:41.701843  830558 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 06:31:41.701865  830558 command_runner.go:130] > VERSION_ID="12"
	I1210 06:31:41.701877  830558 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 06:31:41.701883  830558 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 06:31:41.701887  830558 command_runner.go:130] > ID=debian
	I1210 06:31:41.701891  830558 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 06:31:41.701896  830558 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 06:31:41.701906  830558 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 06:31:41.701968  830558 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:31:41.702000  830558 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:31:41.702014  830558 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 06:31:41.702084  830558 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 06:31:41.702172  830558 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 06:31:41.702185  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> /etc/ssl/certs/7867512.pem
	I1210 06:31:41.702261  830558 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts -> hosts in /etc/test/nested/copy/786751
	I1210 06:31:41.702269  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts -> /etc/test/nested/copy/786751/hosts
	I1210 06:31:41.702315  830558 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/786751
	I1210 06:31:41.709991  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 06:31:41.727898  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts --> /etc/test/nested/copy/786751/hosts (40 bytes)
	I1210 06:31:41.745651  830558 start.go:296] duration metric: took 162.79042ms for postStartSetup
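postStartSetup mirrors anything under $MINIKUBE_HOME/files into the node at the same relative path, which is where the two scp operations above come from. A quick hedged spot-check over SSH against the same profile:

	# Confirm the nested test file landed at its literal path inside the node
	out/minikube-linux-arm64 -p functional-534748 ssh -- cat /etc/test/nested/copy/786751/hosts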
	I1210 06:31:41.745798  830558 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:31:41.745866  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.763287  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:41.863262  830558 command_runner.go:130] > 19%
	I1210 06:31:41.863843  830558 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:31:41.868394  830558 command_runner.go:130] > 159G
	I1210 06:31:41.868719  830558 fix.go:56] duration metric: took 1.356728705s for fixHost
	I1210 06:31:41.868739  830558 start.go:83] releasing machines lock for "functional-534748", held for 1.35678464s
	I1210 06:31:41.868810  830558 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
	I1210 06:31:41.887031  830558 ssh_runner.go:195] Run: cat /version.json
	I1210 06:31:41.887084  830558 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:31:41.887092  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.887143  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:41.906606  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:41.920523  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:42.095537  830558 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1210 06:31:42.095667  830558 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765319469-22089", "minikube_version": "v1.37.0", "commit": "3b564f551de69272c9de22efc5b37f8a5b0156c7"}
	I1210 06:31:42.095846  830558 ssh_runner.go:195] Run: systemctl --version
	I1210 06:31:42.103080  830558 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 06:31:42.103120  830558 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 06:31:42.103532  830558 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 06:31:42.109223  830558 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 06:31:42.109308  830558 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:31:42.109410  830558 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:31:42.119226  830558 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:31:42.119255  830558 start.go:496] detecting cgroup driver to use...
	I1210 06:31:42.119293  830558 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:31:42.119365  830558 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 06:31:42.140472  830558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 06:31:42.156795  830558 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:31:42.156872  830558 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:31:42.175919  830558 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:31:42.191679  830558 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:31:42.319538  830558 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:31:42.438460  830558 docker.go:234] disabling docker service ...
	I1210 06:31:42.438580  830558 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:31:42.456224  830558 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:31:42.471442  830558 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:31:42.599250  830558 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:31:42.716867  830558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:31:42.729172  830558 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:31:42.742342  830558 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1210 06:31:42.743581  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 06:31:42.752861  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 06:31:42.762203  830558 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 06:31:42.762278  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 06:31:42.771751  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:31:42.780168  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 06:31:42.788652  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:31:42.797230  830558 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:31:42.805633  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 06:31:42.814368  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 06:31:42.823074  830558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 06:31:42.832256  830558 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:31:42.839109  830558 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 06:31:42.840076  830558 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:31:42.847676  830558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:31:42.968893  830558 ssh_runner.go:195] Run: sudo systemctl restart containerd
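The sed edits above rewrite /etc/containerd/config.toml in place (cgroupfs instead of systemd cgroups, pause:3.10.1 as the sandbox image, unprivileged ports enabled) before this restart. A hedged check that the rewrite took effect, assuming the default config path used by the commands above:

	# Inspect the keys the harness just rewrote
	sudo grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml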
	I1210 06:31:43.099901  830558 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 06:31:43.099974  830558 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 06:31:43.103852  830558 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1210 06:31:43.103874  830558 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 06:31:43.103881  830558 command_runner.go:130] > Device: 0,72	Inode: 1614        Links: 1
	I1210 06:31:43.103888  830558 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 06:31:43.103903  830558 command_runner.go:130] > Access: 2025-12-10 06:31:43.050872938 +0000
	I1210 06:31:43.103913  830558 command_runner.go:130] > Modify: 2025-12-10 06:31:43.050872938 +0000
	I1210 06:31:43.103919  830558 command_runner.go:130] > Change: 2025-12-10 06:31:43.062873060 +0000
	I1210 06:31:43.103925  830558 command_runner.go:130] >  Birth: -
	I1210 06:31:43.103951  830558 start.go:564] Will wait 60s for crictl version
	I1210 06:31:43.104009  830558 ssh_runner.go:195] Run: which crictl
	I1210 06:31:43.107381  830558 command_runner.go:130] > /usr/local/bin/crictl
	I1210 06:31:43.107477  830558 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:31:43.129358  830558 command_runner.go:130] > Version:  0.1.0
	I1210 06:31:43.129383  830558 command_runner.go:130] > RuntimeName:  containerd
	I1210 06:31:43.129392  830558 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1210 06:31:43.129396  830558 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 06:31:43.131610  830558 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
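	crictl here talks to containerd v2.2.0 over the CRI socket configured earlier. The raw `crictl images --output json` dump that appears later in this log is easier to read condensed; a hedged one-liner, assuming jq is available on the node:
	
	# List just the repo tags from the CRI image inventory
	sudo crictl images --output json | jq -r '.images[].repoTags[]'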
	I1210 06:31:43.131682  830558 ssh_runner.go:195] Run: containerd --version
	I1210 06:31:43.151833  830558 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1210 06:31:43.153818  830558 ssh_runner.go:195] Run: containerd --version
	I1210 06:31:43.172831  830558 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1210 06:31:43.180465  830558 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 06:31:43.183314  830558 cli_runner.go:164] Run: docker network inspect functional-534748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:31:43.199081  830558 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 06:31:43.202971  830558 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1210 06:31:43.203147  830558 kubeadm.go:884] updating cluster {Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:31:43.203272  830558 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:31:43.203351  830558 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:31:43.227955  830558 command_runner.go:130] > {
	I1210 06:31:43.227978  830558 command_runner.go:130] >   "images":  [
	I1210 06:31:43.227982  830558 command_runner.go:130] >     {
	I1210 06:31:43.227991  830558 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1210 06:31:43.227996  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228002  830558 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1210 06:31:43.228005  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228009  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228020  830558 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1210 06:31:43.228023  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228028  830558 command_runner.go:130] >       "size":  "40636774",
	I1210 06:31:43.228032  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228036  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228040  830558 command_runner.go:130] >     },
	I1210 06:31:43.228044  830558 command_runner.go:130] >     {
	I1210 06:31:43.228052  830558 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1210 06:31:43.228056  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228061  830558 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 06:31:43.228066  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228082  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228094  830558 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1210 06:31:43.228097  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228102  830558 command_runner.go:130] >       "size":  "8034419",
	I1210 06:31:43.228108  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228112  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228117  830558 command_runner.go:130] >     },
	I1210 06:31:43.228121  830558 command_runner.go:130] >     {
	I1210 06:31:43.228128  830558 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 06:31:43.228135  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228141  830558 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 06:31:43.228153  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228160  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228168  830558 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1210 06:31:43.228174  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228178  830558 command_runner.go:130] >       "size":  "21168808",
	I1210 06:31:43.228182  830558 command_runner.go:130] >       "username":  "nonroot",
	I1210 06:31:43.228186  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228191  830558 command_runner.go:130] >     },
	I1210 06:31:43.228195  830558 command_runner.go:130] >     {
	I1210 06:31:43.228204  830558 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1210 06:31:43.228208  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228215  830558 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1210 06:31:43.228219  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228225  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228233  830558 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1210 06:31:43.228239  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228243  830558 command_runner.go:130] >       "size":  "21136588",
	I1210 06:31:43.228247  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228250  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.228254  830558 command_runner.go:130] >       },
	I1210 06:31:43.228258  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228264  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228272  830558 command_runner.go:130] >     },
	I1210 06:31:43.228279  830558 command_runner.go:130] >     {
	I1210 06:31:43.228286  830558 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1210 06:31:43.228290  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228295  830558 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1210 06:31:43.228299  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228303  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228313  830558 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1210 06:31:43.228317  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228321  830558 command_runner.go:130] >       "size":  "24678359",
	I1210 06:31:43.228331  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228340  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.228350  830558 command_runner.go:130] >       },
	I1210 06:31:43.228354  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228357  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228361  830558 command_runner.go:130] >     },
	I1210 06:31:43.228364  830558 command_runner.go:130] >     {
	I1210 06:31:43.228371  830558 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1210 06:31:43.228384  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228390  830558 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1210 06:31:43.228394  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228398  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228406  830558 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1210 06:31:43.228412  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228416  830558 command_runner.go:130] >       "size":  "20661043",
	I1210 06:31:43.228420  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228424  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.228427  830558 command_runner.go:130] >       },
	I1210 06:31:43.228438  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228443  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228445  830558 command_runner.go:130] >     },
	I1210 06:31:43.228448  830558 command_runner.go:130] >     {
	I1210 06:31:43.228455  830558 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1210 06:31:43.228463  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228471  830558 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1210 06:31:43.228475  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228479  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228487  830558 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1210 06:31:43.228493  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228497  830558 command_runner.go:130] >       "size":  "22429671",
	I1210 06:31:43.228502  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228512  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228515  830558 command_runner.go:130] >     },
	I1210 06:31:43.228518  830558 command_runner.go:130] >     {
	I1210 06:31:43.228525  830558 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1210 06:31:43.228530  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228538  830558 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1210 06:31:43.228542  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228546  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228557  830558 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1210 06:31:43.228566  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228573  830558 command_runner.go:130] >       "size":  "15391364",
	I1210 06:31:43.228577  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228580  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.228584  830558 command_runner.go:130] >       },
	I1210 06:31:43.228594  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228598  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.228601  830558 command_runner.go:130] >     },
	I1210 06:31:43.228604  830558 command_runner.go:130] >     {
	I1210 06:31:43.228611  830558 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 06:31:43.228617  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.228621  830558 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 06:31:43.228627  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228631  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.228641  830558 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1210 06:31:43.228647  830558 command_runner.go:130] >       ],
	I1210 06:31:43.228655  830558 command_runner.go:130] >       "size":  "267939",
	I1210 06:31:43.228659  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.228669  830558 command_runner.go:130] >         "value":  "65535"
	I1210 06:31:43.228673  830558 command_runner.go:130] >       },
	I1210 06:31:43.228677  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.228681  830558 command_runner.go:130] >       "pinned":  true
	I1210 06:31:43.228686  830558 command_runner.go:130] >     }
	I1210 06:31:43.228689  830558 command_runner.go:130] >   ]
	I1210 06:31:43.228692  830558 command_runner.go:130] > }
	I1210 06:31:43.228843  830558 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 06:31:43.228853  830558 containerd.go:534] Images already preloaded, skipping extraction
	I1210 06:31:43.228913  830558 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:31:43.254390  830558 command_runner.go:130] > {
	I1210 06:31:43.254411  830558 command_runner.go:130] >   "images":  [
	I1210 06:31:43.254415  830558 command_runner.go:130] >     {
	I1210 06:31:43.254424  830558 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1210 06:31:43.254430  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254435  830558 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1210 06:31:43.254440  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254444  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254453  830558 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1210 06:31:43.254460  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254488  830558 command_runner.go:130] >       "size":  "40636774",
	I1210 06:31:43.254495  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254499  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254508  830558 command_runner.go:130] >     },
	I1210 06:31:43.254512  830558 command_runner.go:130] >     {
	I1210 06:31:43.254527  830558 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1210 06:31:43.254534  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254540  830558 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 06:31:43.254543  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254547  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254556  830558 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1210 06:31:43.254576  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254581  830558 command_runner.go:130] >       "size":  "8034419",
	I1210 06:31:43.254585  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254589  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254600  830558 command_runner.go:130] >     },
	I1210 06:31:43.254603  830558 command_runner.go:130] >     {
	I1210 06:31:43.254609  830558 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 06:31:43.254619  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254624  830558 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 06:31:43.254627  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254638  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254649  830558 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1210 06:31:43.254661  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254665  830558 command_runner.go:130] >       "size":  "21168808",
	I1210 06:31:43.254669  830558 command_runner.go:130] >       "username":  "nonroot",
	I1210 06:31:43.254673  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254677  830558 command_runner.go:130] >     },
	I1210 06:31:43.254680  830558 command_runner.go:130] >     {
	I1210 06:31:43.254694  830558 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1210 06:31:43.254698  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254703  830558 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1210 06:31:43.254706  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254710  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254721  830558 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1210 06:31:43.254725  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254729  830558 command_runner.go:130] >       "size":  "21136588",
	I1210 06:31:43.254735  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.254739  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.254744  830558 command_runner.go:130] >       },
	I1210 06:31:43.254749  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254753  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254765  830558 command_runner.go:130] >     },
	I1210 06:31:43.254768  830558 command_runner.go:130] >     {
	I1210 06:31:43.254779  830558 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1210 06:31:43.254786  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254791  830558 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1210 06:31:43.254795  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254798  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254806  830558 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1210 06:31:43.254810  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254816  830558 command_runner.go:130] >       "size":  "24678359",
	I1210 06:31:43.254820  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.254831  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.254835  830558 command_runner.go:130] >       },
	I1210 06:31:43.254843  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254850  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254853  830558 command_runner.go:130] >     },
	I1210 06:31:43.254860  830558 command_runner.go:130] >     {
	I1210 06:31:43.254867  830558 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1210 06:31:43.254873  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254879  830558 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1210 06:31:43.254882  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254886  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254894  830558 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1210 06:31:43.254897  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254901  830558 command_runner.go:130] >       "size":  "20661043",
	I1210 06:31:43.254907  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.254911  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.254916  830558 command_runner.go:130] >       },
	I1210 06:31:43.254920  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254926  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254929  830558 command_runner.go:130] >     },
	I1210 06:31:43.254932  830558 command_runner.go:130] >     {
	I1210 06:31:43.254939  830558 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1210 06:31:43.254945  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.254951  830558 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1210 06:31:43.254958  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254962  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.254970  830558 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1210 06:31:43.254975  830558 command_runner.go:130] >       ],
	I1210 06:31:43.254979  830558 command_runner.go:130] >       "size":  "22429671",
	I1210 06:31:43.254982  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.254987  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.254992  830558 command_runner.go:130] >     },
	I1210 06:31:43.254995  830558 command_runner.go:130] >     {
	I1210 06:31:43.255004  830558 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1210 06:31:43.255008  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.255022  830558 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1210 06:31:43.255026  830558 command_runner.go:130] >       ],
	I1210 06:31:43.255030  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.255038  830558 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1210 06:31:43.255044  830558 command_runner.go:130] >       ],
	I1210 06:31:43.255048  830558 command_runner.go:130] >       "size":  "15391364",
	I1210 06:31:43.255051  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.255055  830558 command_runner.go:130] >         "value":  "0"
	I1210 06:31:43.255058  830558 command_runner.go:130] >       },
	I1210 06:31:43.255061  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.255065  830558 command_runner.go:130] >       "pinned":  false
	I1210 06:31:43.255069  830558 command_runner.go:130] >     },
	I1210 06:31:43.255072  830558 command_runner.go:130] >     {
	I1210 06:31:43.255081  830558 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 06:31:43.255088  830558 command_runner.go:130] >       "repoTags":  [
	I1210 06:31:43.255093  830558 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 06:31:43.255098  830558 command_runner.go:130] >       ],
	I1210 06:31:43.255102  830558 command_runner.go:130] >       "repoDigests":  [
	I1210 06:31:43.255109  830558 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1210 06:31:43.255112  830558 command_runner.go:130] >       ],
	I1210 06:31:43.255116  830558 command_runner.go:130] >       "size":  "267939",
	I1210 06:31:43.255122  830558 command_runner.go:130] >       "uid":  {
	I1210 06:31:43.255129  830558 command_runner.go:130] >         "value":  "65535"
	I1210 06:31:43.255136  830558 command_runner.go:130] >       },
	I1210 06:31:43.255140  830558 command_runner.go:130] >       "username":  "",
	I1210 06:31:43.255143  830558 command_runner.go:130] >       "pinned":  true
	I1210 06:31:43.255147  830558 command_runner.go:130] >     }
	I1210 06:31:43.255150  830558 command_runner.go:130] >   ]
	I1210 06:31:43.255153  830558 command_runner.go:130] > }
	I1210 06:31:43.257476  830558 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 06:31:43.257497  830558 cache_images.go:86] Images are preloaded, skipping loading
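The two `crictl images --output json` dumps above are how minikube concludes that extracting the preload tarball can be skipped: every image the cluster needs already carries a matching `repoTags` entry. Below is a minimal Go sketch of that check, assuming `crictl` is on the PATH; the required tags are taken from the log output itself, and this is illustrative rather than minikube's actual cache_images.go logic.

```go
// Sketch: parse `crictl images --output json` and confirm required tags exist.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages mirrors just the fields of the JSON logged above that we need.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// A few of the tags visible in the dump above; a real check would cover
	// the full per-Kubernetes-version image list.
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
		"registry.k8s.io/etcd:3.6.5-0",
		"registry.k8s.io/pause:3.10.1",
	} {
		fmt.Printf("%-50s preloaded=%v\n", want, have[want])
	}
}
```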
	I1210 06:31:43.257505  830558 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1210 06:31:43.257607  830558 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-534748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
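The kubelet drop-in shown above is generated, not hand-written: minikube fills in the node name, node IP, and container runtime per profile. The sketch below renders a comparable unit with Go's text/template; the template text and field names are assumptions for illustration, not minikube's actual template.

```go
// Hypothetical rendering of a kubelet systemd drop-in like the one logged above.
package main

import (
	"os"
	"text/template"
)

const unitTmpl = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	// Values taken from the log entry above.
	_ = t.Execute(os.Stdout, map[string]string{
		"Runtime":     "containerd",
		"KubeletPath": "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
		"NodeName":    "functional-534748",
		"NodeIP":      "192.168.49.2",
	})
}
```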
	I1210 06:31:43.257674  830558 ssh_runner.go:195] Run: sudo crictl info
	I1210 06:31:43.280486  830558 command_runner.go:130] > {
	I1210 06:31:43.280508  830558 command_runner.go:130] >   "cniconfig": {
	I1210 06:31:43.280515  830558 command_runner.go:130] >     "Networks": [
	I1210 06:31:43.280519  830558 command_runner.go:130] >       {
	I1210 06:31:43.280525  830558 command_runner.go:130] >         "Config": {
	I1210 06:31:43.280531  830558 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1210 06:31:43.280536  830558 command_runner.go:130] >           "Name": "cni-loopback",
	I1210 06:31:43.280541  830558 command_runner.go:130] >           "Plugins": [
	I1210 06:31:43.280545  830558 command_runner.go:130] >             {
	I1210 06:31:43.280549  830558 command_runner.go:130] >               "Network": {
	I1210 06:31:43.280553  830558 command_runner.go:130] >                 "ipam": {},
	I1210 06:31:43.280572  830558 command_runner.go:130] >                 "type": "loopback"
	I1210 06:31:43.280586  830558 command_runner.go:130] >               },
	I1210 06:31:43.280593  830558 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1210 06:31:43.280596  830558 command_runner.go:130] >             }
	I1210 06:31:43.280600  830558 command_runner.go:130] >           ],
	I1210 06:31:43.280614  830558 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1210 06:31:43.280625  830558 command_runner.go:130] >         },
	I1210 06:31:43.280630  830558 command_runner.go:130] >         "IFName": "lo"
	I1210 06:31:43.280633  830558 command_runner.go:130] >       }
	I1210 06:31:43.280637  830558 command_runner.go:130] >     ],
	I1210 06:31:43.280642  830558 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1210 06:31:43.280652  830558 command_runner.go:130] >     "PluginDirs": [
	I1210 06:31:43.280656  830558 command_runner.go:130] >       "/opt/cni/bin"
	I1210 06:31:43.280660  830558 command_runner.go:130] >     ],
	I1210 06:31:43.280671  830558 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1210 06:31:43.280679  830558 command_runner.go:130] >     "Prefix": "eth"
	I1210 06:31:43.280682  830558 command_runner.go:130] >   },
	I1210 06:31:43.280686  830558 command_runner.go:130] >   "config": {
	I1210 06:31:43.280693  830558 command_runner.go:130] >     "cdiSpecDirs": [
	I1210 06:31:43.280699  830558 command_runner.go:130] >       "/etc/cdi",
	I1210 06:31:43.280705  830558 command_runner.go:130] >       "/var/run/cdi"
	I1210 06:31:43.280710  830558 command_runner.go:130] >     ],
	I1210 06:31:43.280714  830558 command_runner.go:130] >     "cni": {
	I1210 06:31:43.280725  830558 command_runner.go:130] >       "binDir": "",
	I1210 06:31:43.280729  830558 command_runner.go:130] >       "binDirs": [
	I1210 06:31:43.280732  830558 command_runner.go:130] >         "/opt/cni/bin"
	I1210 06:31:43.280736  830558 command_runner.go:130] >       ],
	I1210 06:31:43.280740  830558 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1210 06:31:43.280744  830558 command_runner.go:130] >       "confTemplate": "",
	I1210 06:31:43.280747  830558 command_runner.go:130] >       "ipPref": "",
	I1210 06:31:43.280751  830558 command_runner.go:130] >       "maxConfNum": 1,
	I1210 06:31:43.280755  830558 command_runner.go:130] >       "setupSerially": false,
	I1210 06:31:43.280759  830558 command_runner.go:130] >       "useInternalLoopback": false
	I1210 06:31:43.280762  830558 command_runner.go:130] >     },
	I1210 06:31:43.280768  830558 command_runner.go:130] >     "containerd": {
	I1210 06:31:43.280772  830558 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1210 06:31:43.280776  830558 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1210 06:31:43.280781  830558 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1210 06:31:43.280789  830558 command_runner.go:130] >       "runtimes": {
	I1210 06:31:43.280793  830558 command_runner.go:130] >         "runc": {
	I1210 06:31:43.280797  830558 command_runner.go:130] >           "ContainerAnnotations": null,
	I1210 06:31:43.280802  830558 command_runner.go:130] >           "PodAnnotations": null,
	I1210 06:31:43.280806  830558 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1210 06:31:43.280811  830558 command_runner.go:130] >           "cgroupWritable": false,
	I1210 06:31:43.280814  830558 command_runner.go:130] >           "cniConfDir": "",
	I1210 06:31:43.280818  830558 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1210 06:31:43.280822  830558 command_runner.go:130] >           "io_type": "",
	I1210 06:31:43.280827  830558 command_runner.go:130] >           "options": {
	I1210 06:31:43.280838  830558 command_runner.go:130] >             "BinaryName": "",
	I1210 06:31:43.280850  830558 command_runner.go:130] >             "CriuImagePath": "",
	I1210 06:31:43.280854  830558 command_runner.go:130] >             "CriuWorkPath": "",
	I1210 06:31:43.280858  830558 command_runner.go:130] >             "IoGid": 0,
	I1210 06:31:43.280862  830558 command_runner.go:130] >             "IoUid": 0,
	I1210 06:31:43.280866  830558 command_runner.go:130] >             "NoNewKeyring": false,
	I1210 06:31:43.280872  830558 command_runner.go:130] >             "Root": "",
	I1210 06:31:43.280877  830558 command_runner.go:130] >             "ShimCgroup": "",
	I1210 06:31:43.280883  830558 command_runner.go:130] >             "SystemdCgroup": false
	I1210 06:31:43.280887  830558 command_runner.go:130] >           },
	I1210 06:31:43.280892  830558 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1210 06:31:43.280898  830558 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1210 06:31:43.280902  830558 command_runner.go:130] >           "runtimePath": "",
	I1210 06:31:43.280907  830558 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1210 06:31:43.280912  830558 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1210 06:31:43.280918  830558 command_runner.go:130] >           "snapshotter": ""
	I1210 06:31:43.280921  830558 command_runner.go:130] >         }
	I1210 06:31:43.280925  830558 command_runner.go:130] >       }
	I1210 06:31:43.280930  830558 command_runner.go:130] >     },
	I1210 06:31:43.280941  830558 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1210 06:31:43.280949  830558 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1210 06:31:43.280959  830558 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1210 06:31:43.280965  830558 command_runner.go:130] >     "disableApparmor": false,
	I1210 06:31:43.280970  830558 command_runner.go:130] >     "disableHugetlbController": true,
	I1210 06:31:43.280976  830558 command_runner.go:130] >     "disableProcMount": false,
	I1210 06:31:43.280983  830558 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1210 06:31:43.280986  830558 command_runner.go:130] >     "enableCDI": true,
	I1210 06:31:43.280991  830558 command_runner.go:130] >     "enableSelinux": false,
	I1210 06:31:43.280995  830558 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1210 06:31:43.281002  830558 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1210 06:31:43.281009  830558 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1210 06:31:43.281014  830558 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1210 06:31:43.281021  830558 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1210 06:31:43.281029  830558 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1210 06:31:43.281034  830558 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1210 06:31:43.281040  830558 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1210 06:31:43.281047  830558 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1210 06:31:43.281052  830558 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1210 06:31:43.281057  830558 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1210 06:31:43.281062  830558 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1210 06:31:43.281067  830558 command_runner.go:130] >   },
	I1210 06:31:43.281071  830558 command_runner.go:130] >   "features": {
	I1210 06:31:43.281076  830558 command_runner.go:130] >     "supplemental_groups_policy": true
	I1210 06:31:43.281079  830558 command_runner.go:130] >   },
	I1210 06:31:43.281083  830558 command_runner.go:130] >   "golang": "go1.24.9",
	I1210 06:31:43.281095  830558 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1210 06:31:43.281107  830558 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1210 06:31:43.281111  830558 command_runner.go:130] >   "runtimeHandlers": [
	I1210 06:31:43.281114  830558 command_runner.go:130] >     {
	I1210 06:31:43.281118  830558 command_runner.go:130] >       "features": {
	I1210 06:31:43.281129  830558 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1210 06:31:43.281134  830558 command_runner.go:130] >         "user_namespaces": true
	I1210 06:31:43.281137  830558 command_runner.go:130] >       }
	I1210 06:31:43.281142  830558 command_runner.go:130] >     },
	I1210 06:31:43.281145  830558 command_runner.go:130] >     {
	I1210 06:31:43.281148  830558 command_runner.go:130] >       "features": {
	I1210 06:31:43.281153  830558 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1210 06:31:43.281158  830558 command_runner.go:130] >         "user_namespaces": true
	I1210 06:31:43.281161  830558 command_runner.go:130] >       },
	I1210 06:31:43.281168  830558 command_runner.go:130] >       "name": "runc"
	I1210 06:31:43.281171  830558 command_runner.go:130] >     }
	I1210 06:31:43.281174  830558 command_runner.go:130] >   ],
	I1210 06:31:43.281178  830558 command_runner.go:130] >   "status": {
	I1210 06:31:43.281183  830558 command_runner.go:130] >     "conditions": [
	I1210 06:31:43.281186  830558 command_runner.go:130] >       {
	I1210 06:31:43.281190  830558 command_runner.go:130] >         "message": "",
	I1210 06:31:43.281205  830558 command_runner.go:130] >         "reason": "",
	I1210 06:31:43.281209  830558 command_runner.go:130] >         "status": true,
	I1210 06:31:43.281214  830558 command_runner.go:130] >         "type": "RuntimeReady"
	I1210 06:31:43.281220  830558 command_runner.go:130] >       },
	I1210 06:31:43.281224  830558 command_runner.go:130] >       {
	I1210 06:31:43.281230  830558 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1210 06:31:43.281235  830558 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1210 06:31:43.281239  830558 command_runner.go:130] >         "status": false,
	I1210 06:31:43.281243  830558 command_runner.go:130] >         "type": "NetworkReady"
	I1210 06:31:43.281246  830558 command_runner.go:130] >       },
	I1210 06:31:43.281249  830558 command_runner.go:130] >       {
	I1210 06:31:43.281271  830558 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1210 06:31:43.281280  830558 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1210 06:31:43.281286  830558 command_runner.go:130] >         "status": false,
	I1210 06:31:43.281292  830558 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1210 06:31:43.281298  830558 command_runner.go:130] >       }
	I1210 06:31:43.281301  830558 command_runner.go:130] >     ]
	I1210 06:31:43.281304  830558 command_runner.go:130] >   }
	I1210 06:31:43.281308  830558 command_runner.go:130] > }
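Note the `status.conditions` block at the end of the `crictl info` dump: RuntimeReady is true, but NetworkReady is false with reason NetworkPluginNotReady, which is expected before any CNI config exists in /etc/cni/net.d (kindnet is applied later in startup, as the next lines show). A small Go sketch that extracts those conditions; the field names follow the JSON above, and this is not minikube's code.

```go
// Sketch: read runtime/network readiness conditions from `crictl info`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criInfo mirrors only the status.conditions portion of the logged JSON.
type criInfo struct {
	Status struct {
		Conditions []struct {
			Type    string `json:"type"`
			Status  bool   `json:"status"`
			Reason  string `json:"reason"`
			Message string `json:"message"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "info").Output()
	if err != nil {
		panic(err)
	}
	var info criInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	for _, c := range info.Status.Conditions {
		// e.g. "NetworkReady=false NetworkPluginNotReady" until a CNI config lands.
		fmt.Printf("%s=%v %s\n", c.Type, c.Status, c.Reason)
	}
}
```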
	I1210 06:31:43.283879  830558 cni.go:84] Creating CNI manager for ""
	I1210 06:31:43.283902  830558 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:31:43.283924  830558 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:31:43.283950  830558 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-534748 NodeName:functional-534748 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:31:43.284076  830558 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-534748"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
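The generated kubeadm config above is a single YAML stream containing four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A standard-library-only Go sketch that splits the stream and reports each document's apiVersion and kind; the file path is the one written by the scp step logged below.

```go
// Sketch: enumerate the documents in the multi-document kubeadm YAML.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

var kindRe = regexp.MustCompile(`(?m)^(apiVersion|kind):\s*(\S+)`)

func main() {
	// Path taken from the log; adjust when experimenting locally.
	cfg, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	for i, doc := range strings.Split(string(cfg), "\n---\n") {
		fields := map[string]string{}
		for _, m := range kindRe.FindAllStringSubmatch(doc, -1) {
			fields[m[1]] = m[2]
		}
		fmt.Printf("doc %d: %s/%s\n", i, fields["apiVersion"], fields["kind"])
	}
}
```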
	I1210 06:31:43.284154  830558 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 06:31:43.290942  830558 command_runner.go:130] > kubeadm
	I1210 06:31:43.290962  830558 command_runner.go:130] > kubectl
	I1210 06:31:43.290967  830558 command_runner.go:130] > kubelet
	I1210 06:31:43.291913  830558 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:31:43.292013  830558 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:31:43.299680  830558 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 06:31:43.314082  830558 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 06:31:43.330260  830558 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1210 06:31:43.347625  830558 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:31:43.352127  830558 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1210 06:31:43.352925  830558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:31:43.471703  830558 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:31:44.297320  830558 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748 for IP: 192.168.49.2
	I1210 06:31:44.297353  830558 certs.go:195] generating shared ca certs ...
	I1210 06:31:44.297370  830558 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:31:44.297565  830558 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 06:31:44.297620  830558 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 06:31:44.297640  830558 certs.go:257] generating profile certs ...
	I1210 06:31:44.297767  830558 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key
	I1210 06:31:44.297844  830558 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key.7cb3dc2f
	I1210 06:31:44.297905  830558 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key
	I1210 06:31:44.297923  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 06:31:44.297952  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 06:31:44.297969  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 06:31:44.297986  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 06:31:44.297997  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 06:31:44.298022  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 06:31:44.298036  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 06:31:44.298051  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 06:31:44.298107  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 06:31:44.298147  830558 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 06:31:44.298160  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:31:44.298194  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 06:31:44.298223  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:31:44.298262  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 06:31:44.298323  830558 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 06:31:44.298363  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem -> /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.298380  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.298399  830558 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.299062  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:31:44.319985  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:31:44.339121  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:31:44.360050  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:31:44.381013  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:31:44.398560  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:31:44.416157  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:31:44.433967  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:31:44.452197  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 06:31:44.470088  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 06:31:44.487844  830558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:31:44.505551  830558 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:31:44.518440  830558 ssh_runner.go:195] Run: openssl version
	I1210 06:31:44.524638  830558 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 06:31:44.525053  830558 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.532466  830558 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 06:31:44.539857  830558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.543663  830558 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.543696  830558 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.543746  830558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 06:31:44.585800  830558 command_runner.go:130] > 51391683
	I1210 06:31:44.586242  830558 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:31:44.594754  830558 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.602172  830558 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 06:31:44.609494  830558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.613294  830558 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.613412  830558 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.613500  830558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 06:31:44.654003  830558 command_runner.go:130] > 3ec20f2e
	I1210 06:31:44.654513  830558 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:31:44.661754  830558 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.668842  830558 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:31:44.676441  830558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.680175  830558 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.680286  830558 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.680373  830558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:31:44.725770  830558 command_runner.go:130] > b5213941
	I1210 06:31:44.726319  830558 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
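The `openssl x509 -hash` / `ln -fs` pairs above implement the standard OpenSSL CA directory layout: a symlink named `<subject-hash>.0` pointing at each certificate in /etc/ssl/certs. A hypothetical Go helper (not minikube's certs.go) that reproduces those two steps; it shells out to openssl and needs root to write /etc/ssl/certs.

```go
// Sketch: install a CA certificate under its OpenSSL subject-hash name.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pemPath string) error {
	// `openssl x509 -hash -noout` prints the subject hash,
	// e.g. "b5213941" for minikubeCA.pem in the log above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // mirror `ln -fs`: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```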
	I1210 06:31:44.734095  830558 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:31:44.737911  830558 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:31:44.737986  830558 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1210 06:31:44.737999  830558 command_runner.go:130] > Device: 259,1	Inode: 1050653     Links: 1
	I1210 06:31:44.738007  830558 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 06:31:44.738013  830558 command_runner.go:130] > Access: 2025-12-10 06:27:36.644508596 +0000
	I1210 06:31:44.738018  830558 command_runner.go:130] > Modify: 2025-12-10 06:23:32.161941675 +0000
	I1210 06:31:44.738023  830558 command_runner.go:130] > Change: 2025-12-10 06:23:32.161941675 +0000
	I1210 06:31:44.738028  830558 command_runner.go:130] >  Birth: 2025-12-10 06:23:32.161941675 +0000
	I1210 06:31:44.738118  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:31:44.779233  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.779410  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:31:44.820004  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.820457  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:31:44.860741  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.861258  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:31:44.902039  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.902514  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:31:44.943742  830558 command_runner.go:130] > Certificate will not expire
	I1210 06:31:44.944234  830558 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 06:31:44.986027  830558 command_runner.go:130] > Certificate will not expire
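Each `-checkend 86400` invocation above asks openssl whether the certificate expires within the next 24 hours (86400 seconds). The same check in plain Go with crypto/x509, as an illustrative sketch:

```go
// Sketch: Go equivalent of `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire") // matches the log output above
	}
}
```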
	I1210 06:31:44.986500  830558 kubeadm.go:401] StartCluster: {Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:31:44.986586  830558 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 06:31:44.986679  830558 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:31:45.063121  830558 cri.go:89] found id: ""
	I1210 06:31:45.063216  830558 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:31:45.099783  830558 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 06:31:45.099866  830558 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 06:31:45.099891  830558 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 06:31:45.101399  830558 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:31:45.101477  830558 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:31:45.101575  830558 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:31:45.115892  830558 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:31:45.116487  830558 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-534748" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:45.116718  830558 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-784887/kubeconfig needs updating (will repair): [kubeconfig missing "functional-534748" cluster setting kubeconfig missing "functional-534748" context setting]
	I1210 06:31:45.117177  830558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:31:45.117949  830558 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:45.118213  830558 kapi.go:59] client config for functional-534748: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key", CAFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:31:45.118984  830558 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 06:31:45.119085  830558 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 06:31:45.119134  830558 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 06:31:45.119161  830558 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 06:31:45.119217  830558 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 06:31:45.119055  830558 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 06:31:45.119702  830558 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:31:45.137495  830558 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1210 06:31:45.137534  830558 kubeadm.go:602] duration metric: took 36.034287ms to restartPrimaryControlPlane
	I1210 06:31:45.137546  830558 kubeadm.go:403] duration metric: took 151.054854ms to StartCluster
	I1210 06:31:45.137576  830558 settings.go:142] acquiring lock: {Name:mkea9bfe0055ec090e4ce10a287599541b65b38e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:31:45.137653  830558 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:45.138311  830558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:31:45.138643  830558 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 06:31:45.139043  830558 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:31:45.139108  830558 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:31:45.139177  830558 addons.go:70] Setting storage-provisioner=true in profile "functional-534748"
	I1210 06:31:45.139193  830558 addons.go:239] Setting addon storage-provisioner=true in "functional-534748"
	I1210 06:31:45.139221  830558 host.go:66] Checking if "functional-534748" exists ...
	I1210 06:31:45.139239  830558 addons.go:70] Setting default-storageclass=true in profile "functional-534748"
	I1210 06:31:45.139259  830558 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-534748"
	I1210 06:31:45.139583  830558 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:31:45.139701  830558 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
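The toEnable map logged above drives everything that follows: only the two entries set to true (default-storageclass and storage-provisioner) get a "Setting addon" line and a docker container inspect. A toy sketch of that filter loop, with a print standing in for the real enable path:

// Sketch of the addons start loop implied by the toEnable map; the two
// true entries match this log, the rest of the map is abbreviated.
package main

import "fmt"

func main() {
	toEnable := map[string]bool{
		"default-storageclass": true,
		"storage-provisioner":  true,
		"ingress":              false,
		"metrics-server":       false,
	}
	for name, enabled := range toEnable {
		if !enabled {
			continue
		}
		fmt.Printf("Setting addon %s=true in %q\n", name, "functional-534748")
	}
}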
	I1210 06:31:45.145574  830558 out.go:179] * Verifying Kubernetes components...
	I1210 06:31:45.148690  830558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:31:45.190248  830558 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:31:45.190435  830558 kapi.go:59] client config for functional-534748: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key", CAFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:31:45.190756  830558 addons.go:239] Setting addon default-storageclass=true in "functional-534748"
	I1210 06:31:45.190791  830558 host.go:66] Checking if "functional-534748" exists ...
	I1210 06:31:45.192137  830558 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:31:45.207281  830558 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:31:45.210256  830558 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:45.210285  830558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:31:45.210364  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:45.229978  830558 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:45.230080  830558 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:31:45.230235  830558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:31:45.286606  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:31:45.319378  830558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
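The "scp memory --> /etc/kubernetes/addons/..." steps never stage a local temp file: the manifest bytes are streamed over the SSH session (port 33530, the id_rsa key from the sshutil lines above) and written on the node. A simplified sketch using the plain ssh CLI rather than minikube's internal ssh runner:

// Sketch only; minikube uses its own ssh_runner, not the ssh binary.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// scpMemory streams manifest bytes to the node and writes them with sudo tee.
func scpMemory(data []byte, dest string) error {
	cmd := exec.Command("ssh",
		"-i", "/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa",
		"-p", "33530",
		"docker@127.0.0.1",
		"sudo tee "+dest+" >/dev/null")
	cmd.Stdin = bytes.NewReader(data)
	return cmd.Run()
}

func main() {
	manifest := []byte("# storage-provisioner manifest bytes go here\n")
	fmt.Println(scpMemory(manifest, "/etc/kubernetes/addons/storage-provisioner.yaml"))
}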
	I1210 06:31:45.390267  830558 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:31:45.420552  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:45.445487  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:46.049742  830558 node_ready.go:35] waiting up to 6m0s for node "functional-534748" to be "Ready" ...
	I1210 06:31:46.049893  830558 type.go:168] "Request Body" body=""
	I1210 06:31:46.049953  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:46.050234  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.050272  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.050293  830558 retry.go:31] will retry after 223.621304ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.050345  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.050359  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.050366  830558 retry.go:31] will retry after 336.04204ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.050483  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
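From here node_ready.go polls GET /api/v1/nodes/functional-534748 on a 500ms cadence (visible in the timestamps below) for up to the 6m0s budget set at 06:31:46.049, checking the node's Ready condition on each response. A client-go sketch of that wait loop, assuming a config like the rest.Config dump earlier (TLS fields elided here):

// Sketch of the node-Ready wait loop; not minikube's node_ready.go itself.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// waitNodeReady polls the node until its Ready condition is True or the
// timeout elapses, logging and retrying on transport errors such as the
// "connection refused" seen throughout this run.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		} else {
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	cfg := &rest.Config{Host: "https://192.168.49.2:8441"} // TLS fields as in the earlier dump
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(cs, "functional-534748", 6*time.Minute))
}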
	I1210 06:31:46.274791  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:46.331904  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.335903  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.335940  830558 retry.go:31] will retry after 342.637774ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.387178  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:46.449259  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.449297  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.449332  830558 retry.go:31] will retry after 384.971387ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
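Each failed apply is handed back to retry.go, and the logged delays (223ms, 336ms, 342ms, 384ms, 477ms, 587ms, then roughly 1s and up) trace an approximately exponential, jittered backoff. A generic sketch of that pattern; the base delay, cap, and jitter formula are my assumptions, not minikube's exact parameters:

// Sketch of retry with jittered exponential backoff.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo reruns fn with exponentially growing, jittered sleeps until it
// succeeds or the overall budget is spent.
func retryExpo(fn func() error, base, budget time.Duration) error {
	delay := base
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		err := fn()
		if err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // jitter in [0, delay)
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
	return errors.New("retry budget exhausted")
}

func main() {
	attempts := 0
	_ = retryExpo(func() error {
		attempts++
		if attempts < 4 { // stand-in for the failing kubectl apply
			return errors.New("dial tcp [::1]:8441: connect: connection refused")
		}
		return nil
	}, 200*time.Millisecond, 30*time.Second)
}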
	I1210 06:31:46.550591  830558 type.go:168] "Request Body" body=""
	I1210 06:31:46.550669  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:46.551072  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:46.679392  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:46.735005  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.738824  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.738907  830558 retry.go:31] will retry after 477.156435ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.835016  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:46.898535  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:46.902447  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:46.902505  830558 retry.go:31] will retry after 587.076477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:47.050715  830558 type.go:168] "Request Body" body=""
	I1210 06:31:47.050787  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:47.051147  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:47.216664  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:47.275932  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:47.275982  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:47.276003  830558 retry.go:31] will retry after 1.079016213s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:47.490360  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:47.550012  830558 type.go:168] "Request Body" body=""
	I1210 06:31:47.550089  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:47.550390  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:47.551946  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:47.551982  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:47.552018  830558 retry.go:31] will retry after 1.089774327s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:48.050900  830558 type.go:168] "Request Body" body=""
	I1210 06:31:48.051018  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:48.051381  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:48.051446  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
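Both failure modes in this window are the same symptom: nothing is accepting TCP connections on port 8441, whether reached as localhost from inside the node (the kubectl applies) or as 192.168.49.2 from the host (the node polls). A raw dial is a quick way to confirm the socket state independently of kubectl; this probe is my addition, not part of the test:

// Sketch: distinguish "port closed" from slower API-level failures.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port not accepting connections:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}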
	I1210 06:31:48.355639  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:48.413382  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:48.416787  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:48.416855  830558 retry.go:31] will retry after 1.248652089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:48.550032  830558 type.go:168] "Request Body" body=""
	I1210 06:31:48.550107  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:48.550399  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:48.642762  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:48.712914  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:48.712955  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:48.712975  830558 retry.go:31] will retry after 929.620731ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:49.050281  830558 type.go:168] "Request Body" body=""
	I1210 06:31:49.050356  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:49.050675  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:49.550083  830558 type.go:168] "Request Body" body=""
	I1210 06:31:49.550159  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:49.550564  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:49.643743  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:49.666178  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:49.715961  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:49.724279  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:49.724309  830558 retry.go:31] will retry after 2.037720794s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:49.735770  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:49.735805  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:49.735824  830558 retry.go:31] will retry after 1.943919735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:50.050051  830558 type.go:168] "Request Body" body=""
	I1210 06:31:50.050130  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:50.050489  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:50.550100  830558 type.go:168] "Request Body" body=""
	I1210 06:31:50.550171  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:50.550494  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:50.550550  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:51.050020  830558 type.go:168] "Request Body" body=""
	I1210 06:31:51.050105  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:51.050456  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:51.550105  830558 type.go:168] "Request Body" body=""
	I1210 06:31:51.550181  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:51.550525  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:51.680862  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:51.745585  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:51.745620  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:51.745639  830558 retry.go:31] will retry after 2.112684099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:51.762814  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:51.821569  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:51.825567  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:51.825603  830558 retry.go:31] will retry after 2.699110245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:52.050957  830558 type.go:168] "Request Body" body=""
	I1210 06:31:52.051054  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:52.051439  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:52.550045  830558 type.go:168] "Request Body" body=""
	I1210 06:31:52.550123  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:52.550446  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:53.050176  830558 type.go:168] "Request Body" body=""
	I1210 06:31:53.050253  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:53.050635  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:53.050697  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:53.550816  830558 type.go:168] "Request Body" body=""
	I1210 06:31:53.550907  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:53.551250  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:53.858630  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:53.918073  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:53.921869  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:53.921905  830558 retry.go:31] will retry after 2.635687612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:54.050031  830558 type.go:168] "Request Body" body=""
	I1210 06:31:54.050137  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:54.050511  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:54.525086  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:54.550579  830558 type.go:168] "Request Body" body=""
	I1210 06:31:54.550656  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:54.550932  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:54.585338  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:54.588955  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:54.588990  830558 retry.go:31] will retry after 2.164216453s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:55.050098  830558 type.go:168] "Request Body" body=""
	I1210 06:31:55.050166  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:55.050436  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:55.550589  830558 type.go:168] "Request Body" body=""
	I1210 06:31:55.550697  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:55.551055  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:55.551113  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:56.050733  830558 type.go:168] "Request Body" body=""
	I1210 06:31:56.050815  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:56.051188  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:56.549910  830558 type.go:168] "Request Body" body=""
	I1210 06:31:56.550000  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:56.550302  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:56.558696  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:31:56.634154  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:56.634201  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:56.634222  830558 retry.go:31] will retry after 5.842380515s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:56.753466  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:31:56.822332  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:31:56.822371  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:56.822391  830558 retry.go:31] will retry after 4.388036914s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:31:57.050861  830558 type.go:168] "Request Body" body=""
	I1210 06:31:57.050942  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:57.051261  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:57.550002  830558 type.go:168] "Request Body" body=""
	I1210 06:31:57.550079  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:57.550421  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:58.049946  830558 type.go:168] "Request Body" body=""
	I1210 06:31:58.050027  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:58.050302  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:58.050362  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:58.550039  830558 type.go:168] "Request Body" body=""
	I1210 06:31:58.550138  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:58.550513  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:59.050184  830558 type.go:168] "Request Body" body=""
	I1210 06:31:59.050262  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:59.050626  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:59.550005  830558 type.go:168] "Request Body" body=""
	I1210 06:31:59.550077  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:59.550409  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:00.050135  830558 type.go:168] "Request Body" body=""
	I1210 06:32:00.050225  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:00.050569  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:00.050621  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:00.550824  830558 type.go:168] "Request Body" body=""
	I1210 06:32:00.550903  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:00.551281  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:01.050843  830558 type.go:168] "Request Body" body=""
	I1210 06:32:01.050921  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:01.051196  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:01.210631  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:32:01.270135  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:01.273736  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:01.273765  830558 retry.go:31] will retry after 7.330909522s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:01.550049  830558 type.go:168] "Request Body" body=""
	I1210 06:32:01.550130  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:01.550449  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:02.050246  830558 type.go:168] "Request Body" body=""
	I1210 06:32:02.050347  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:02.050709  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:02.050768  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:02.477366  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:32:02.540275  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:02.540316  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:02.540336  830558 retry.go:31] will retry after 13.941322707s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-534748 polls repeated every ~500 ms from 06:32:02.55 through 06:32:08.55, each failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 "will retry" warnings logged at 06:32:04, 06:32:06 and 06:32:08 ...]
	I1210 06:32:08.605823  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:32:08.661807  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:08.666022  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:08.666054  830558 retry.go:31] will retry after 18.459732711s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
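Note that --validate=false, which the kubectl error message suggests, would only skip the /openapi/v2 download; the apply itself still needs a live apiserver. A quick way to confirm the underlying symptom, as a hypothetical standalone probe of the same endpoint (InsecureSkipVerify because the apiserver serves a self-signed certificate):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver cert is not in the host trust store.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://localhost:8441/openapi/v2")
		if err != nil {
			fmt.Println("apiserver down:", err) // e.g. connect: connection refused
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver up:", resp.Status)
	}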
	[... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-534748 polls repeated every ~500 ms from 06:32:09.05 through 06:32:16.05, each failing with "connect: connection refused"; node_ready.go:55 "will retry" warnings logged at 06:32:10.55, 06:32:13.05 and 06:32:15.55 ...]
	I1210 06:32:16.482787  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:32:16.542663  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:16.546278  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:16.546307  830558 retry.go:31] will retry after 7.242230365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-534748 polls repeated every ~500 ms from 06:32:16.55 through 06:32:23.55, each failing with "connect: connection refused"; node_ready.go:55 "will retry" warnings logged at 06:32:18.05, 06:32:20.55 and 06:32:22.55 ...]
	I1210 06:32:23.788809  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:32:23.847955  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:23.851833  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:23.851867  830558 retry.go:31] will retry after 12.516286884s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-534748 polls repeated every ~500 ms from 06:32:24.05 through 06:32:27.05, each failing with "connect: connection refused"; node_ready.go:55 "will retry" warnings logged at 06:32:24.55 and 06:32:27.05 ...]
	I1210 06:32:27.126908  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:32:27.191358  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:27.191398  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:27.191417  830558 retry.go:31] will retry after 11.065094951s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-534748 polls repeated every ~500 ms from 06:32:27.55 through 06:32:36.05, each failing with "connect: connection refused"; node_ready.go:55 "will retry" warnings logged at 06:32:29.05, 06:32:31.05, 06:32:33.55 and 06:32:35.55 ...]
	I1210 06:32:36.369119  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:32:36.431728  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:36.431764  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:36.431783  830558 retry.go:31] will retry after 39.090862924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-534748 polls repeated every ~500 ms from 06:32:36.55 through 06:32:38.05, each failing with "connect: connection refused"; node_ready.go:55 "will retry" warning logged at 06:32:38.05 ...]
	I1210 06:32:38.256706  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:32:38.315606  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:32:38.315652  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:38.315671  830558 retry.go:31] will retry after 24.874249468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:32:38.549965  830558 type.go:168] "Request Body" body=""
	I1210 06:32:38.550037  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:38.550353  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:39.050035  830558 type.go:168] "Request Body" body=""
	I1210 06:32:39.050112  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:39.050437  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:39.550165  830558 type.go:168] "Request Body" body=""
	I1210 06:32:39.550240  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:39.550611  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:40.050932  830558 type.go:168] "Request Body" body=""
	I1210 06:32:40.051023  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:40.051412  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:40.051484  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:40.550007  830558 type.go:168] "Request Body" body=""
	I1210 06:32:40.550092  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:40.550424  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:41.050151  830558 type.go:168] "Request Body" body=""
	I1210 06:32:41.050226  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:41.050542  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:41.549934  830558 type.go:168] "Request Body" body=""
	I1210 06:32:41.550007  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:41.550347  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:42.050083  830558 type.go:168] "Request Body" body=""
	I1210 06:32:42.050160  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:42.050554  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:42.550115  830558 type.go:168] "Request Body" body=""
	I1210 06:32:42.550200  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:42.550557  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:42.550613  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:43.050266  830558 type.go:168] "Request Body" body=""
	I1210 06:32:43.050343  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:43.050649  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:43.549979  830558 type.go:168] "Request Body" body=""
	I1210 06:32:43.550060  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:43.550388  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:44.049985  830558 type.go:168] "Request Body" body=""
	I1210 06:32:44.050061  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:44.050403  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:44.549913  830558 type.go:168] "Request Body" body=""
	I1210 06:32:44.549995  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:44.550254  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:45.050148  830558 type.go:168] "Request Body" body=""
	I1210 06:32:45.050255  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:45.050774  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:45.050854  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:45.549930  830558 type.go:168] "Request Body" body=""
	I1210 06:32:45.550027  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:45.550349  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	... (identical GET poll of https://192.168.49.2:8441/api/v1/nodes/functional-534748 repeated every ~500ms from 06:32:46.05 through 06:33:03.05, every attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 "will retry" warnings logged roughly every 2.5s, at 06:32:47.5, 06:32:50.0, 06:32:52.5, 06:32:54.5, 06:32:56.5, 06:32:58.5 and 06:33:01.0) ...
	I1210 06:33:03.190859  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:33:03.248648  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:33:03.248694  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:33:03.248794  830558 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
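kubectl's client-side validation needs the apiserver's /openapi/v2 endpoint, so while the server is down the apply fails before anything is submitted, and minikube retries the whole addon callback. A rough Go sketch of that apply-and-retry shape — the binary and manifest paths come from the log; the retry count and backoff are hypothetical, not minikube's actual values:

// Sketch only: run kubectl apply and retry on failure, as the
// "apply failed, will retry" warnings above describe.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		// Same invocation as the log: sudo with KUBECONFIG set inline.
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed, will retry: %v\n%s", err, out)
		time.Sleep(2 * time.Second) // hypothetical backoff interval
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
		fmt.Println(err)
	}
}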
	... (node poll continued every ~500ms from 06:33:03.55 through 06:33:15.05, all attempts "connection refused"; node_ready.go:55 warnings at 06:33:05.5, 06:33:08.0, 06:33:10.5, 06:33:12.5 and 06:33:15.05) ...
	I1210 06:33:15.522803  830558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:33:15.550270  830558 type.go:168] "Request Body" body=""
	I1210 06:33:15.550344  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:15.550631  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:15.583628  830558 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:33:15.587769  830558 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:33:15.587875  830558 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 06:33:15.590972  830558 out.go:179] * Enabled addons: 
	I1210 06:33:15.594685  830558 addons.go:530] duration metric: took 1m30.455573868s for enable addons: enabled=[]
	... (node poll continued every ~500ms from 06:33:16.05 through 06:33:43.55, every attempt "connection refused"; node_ready.go:55 warnings roughly every 2.5s, the last at 06:33:41.55) ...
	I1210 06:33:44.050208  830558 type.go:168] "Request Body" body=""
	I1210 06:33:44.050291  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:44.050657  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:44.050712  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:44.549928  830558 type.go:168] "Request Body" body=""
	I1210 06:33:44.550010  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:44.550272  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:45.050007  830558 type.go:168] "Request Body" body=""
	I1210 06:33:45.050090  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:45.050538  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:45.550260  830558 type.go:168] "Request Body" body=""
	I1210 06:33:45.550359  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:45.550744  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:46.051019  830558 type.go:168] "Request Body" body=""
	I1210 06:33:46.051104  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:46.051470  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:46.051522  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:46.550019  830558 type.go:168] "Request Body" body=""
	I1210 06:33:46.550105  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:46.550441  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:47.050177  830558 type.go:168] "Request Body" body=""
	I1210 06:33:47.050256  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:47.050580  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:47.550565  830558 type.go:168] "Request Body" body=""
	I1210 06:33:47.550631  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:47.550895  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:48.050718  830558 type.go:168] "Request Body" body=""
	I1210 06:33:48.050799  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:48.051139  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:48.550959  830558 type.go:168] "Request Body" body=""
	I1210 06:33:48.551034  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:48.551396  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:48.551454  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:49.049969  830558 type.go:168] "Request Body" body=""
	I1210 06:33:49.050042  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:49.050364  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:49.550025  830558 type.go:168] "Request Body" body=""
	I1210 06:33:49.550097  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:49.550429  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:50.050016  830558 type.go:168] "Request Body" body=""
	I1210 06:33:50.050099  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:50.050484  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:50.549977  830558 type.go:168] "Request Body" body=""
	I1210 06:33:50.550046  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:50.550304  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:51.049996  830558 type.go:168] "Request Body" body=""
	I1210 06:33:51.050078  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:51.050402  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:51.050452  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:51.550024  830558 type.go:168] "Request Body" body=""
	I1210 06:33:51.550099  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:51.550445  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:52.049971  830558 type.go:168] "Request Body" body=""
	I1210 06:33:52.050042  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:52.050360  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:52.549965  830558 type.go:168] "Request Body" body=""
	I1210 06:33:52.550043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:52.550379  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:53.050013  830558 type.go:168] "Request Body" body=""
	I1210 06:33:53.050108  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:53.050485  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:53.050541  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:53.549985  830558 type.go:168] "Request Body" body=""
	I1210 06:33:53.550061  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:53.550363  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:54.050022  830558 type.go:168] "Request Body" body=""
	I1210 06:33:54.050106  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:54.050509  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:54.550225  830558 type.go:168] "Request Body" body=""
	I1210 06:33:54.550302  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:54.550641  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:55.049973  830558 type.go:168] "Request Body" body=""
	I1210 06:33:55.050050  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:55.050327  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:55.550478  830558 type.go:168] "Request Body" body=""
	I1210 06:33:55.550556  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:55.550933  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:55.550991  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:56.050594  830558 type.go:168] "Request Body" body=""
	I1210 06:33:56.050672  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:56.051056  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:56.550810  830558 type.go:168] "Request Body" body=""
	I1210 06:33:56.550888  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:56.551156  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:57.050906  830558 type.go:168] "Request Body" body=""
	I1210 06:33:57.050979  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:57.051317  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:57.550013  830558 type.go:168] "Request Body" body=""
	I1210 06:33:57.550089  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:57.550388  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:58.049906  830558 type.go:168] "Request Body" body=""
	I1210 06:33:58.049976  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:58.050249  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:58.050294  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:58.549945  830558 type.go:168] "Request Body" body=""
	I1210 06:33:58.550024  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:58.550385  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:59.050095  830558 type.go:168] "Request Body" body=""
	I1210 06:33:59.050176  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:59.050522  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:59.550222  830558 type.go:168] "Request Body" body=""
	I1210 06:33:59.550309  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:59.550612  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:00.050052  830558 type.go:168] "Request Body" body=""
	I1210 06:34:00.050141  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:00.050455  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:00.050684  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:00.549926  830558 type.go:168] "Request Body" body=""
	I1210 06:34:00.550006  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:00.550355  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:01.050662  830558 type.go:168] "Request Body" body=""
	I1210 06:34:01.050737  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:01.051064  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:01.550884  830558 type.go:168] "Request Body" body=""
	I1210 06:34:01.550964  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:01.551306  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:02.050041  830558 type.go:168] "Request Body" body=""
	I1210 06:34:02.050127  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:02.050503  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:02.550195  830558 type.go:168] "Request Body" body=""
	I1210 06:34:02.550268  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:02.550561  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:02.550618  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:03.050297  830558 type.go:168] "Request Body" body=""
	I1210 06:34:03.050373  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:03.050719  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:03.550039  830558 type.go:168] "Request Body" body=""
	I1210 06:34:03.550121  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:03.550444  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:04.049984  830558 type.go:168] "Request Body" body=""
	I1210 06:34:04.050057  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:04.050379  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:04.550075  830558 type.go:168] "Request Body" body=""
	I1210 06:34:04.550154  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:04.550510  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:05.050122  830558 type.go:168] "Request Body" body=""
	I1210 06:34:05.050215  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:05.050591  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:05.050642  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:05.550387  830558 type.go:168] "Request Body" body=""
	I1210 06:34:05.550492  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:05.550754  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:06.050542  830558 type.go:168] "Request Body" body=""
	I1210 06:34:06.050630  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:06.050966  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:06.550619  830558 type.go:168] "Request Body" body=""
	I1210 06:34:06.550697  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:06.551056  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:07.050145  830558 type.go:168] "Request Body" body=""
	I1210 06:34:07.050214  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:07.050555  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:07.550029  830558 type.go:168] "Request Body" body=""
	I1210 06:34:07.550115  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:07.550443  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:07.550518  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:08.050047  830558 type.go:168] "Request Body" body=""
	I1210 06:34:08.050151  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:08.050544  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:08.549967  830558 type.go:168] "Request Body" body=""
	I1210 06:34:08.550038  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:08.550347  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:09.050024  830558 type.go:168] "Request Body" body=""
	I1210 06:34:09.050098  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:09.050423  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:09.550029  830558 type.go:168] "Request Body" body=""
	I1210 06:34:09.550115  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:09.550495  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:09.550556  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
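	(Every failure in this stretch is the same dial-level error: the connection is refused before TLS or HTTP even begin, so nothing is listening on port 8441 inside the node container. A quick hypothetical probe for exactly that condition, assuming the same endpoint as the log:)

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			// Matches the failure mode in the log: no listener on 8441,
			// so every GET dies at the TCP dial.
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port open; HTTP-level checks can proceed")
	}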
	I1210 06:34:10.050581  830558 type.go:168] "Request Body" body=""
	I1210 06:34:10.050657  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:10.050987  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:10.550878  830558 type.go:168] "Request Body" body=""
	I1210 06:34:10.550954  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:10.551276  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:11.049977  830558 type.go:168] "Request Body" body=""
	I1210 06:34:11.050073  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:11.050436  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:11.550577  830558 type.go:168] "Request Body" body=""
	I1210 06:34:11.550654  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:11.550920  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:11.550968  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:12.050759  830558 type.go:168] "Request Body" body=""
	I1210 06:34:12.050844  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:12.051194  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:12.549950  830558 type.go:168] "Request Body" body=""
	I1210 06:34:12.550032  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:12.550372  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:13.050824  830558 type.go:168] "Request Body" body=""
	I1210 06:34:13.050891  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:13.051155  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:13.550910  830558 type.go:168] "Request Body" body=""
	I1210 06:34:13.550990  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:13.551324  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:13.551384  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:14.049928  830558 type.go:168] "Request Body" body=""
	I1210 06:34:14.050007  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:14.050372  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:14.550061  830558 type.go:168] "Request Body" body=""
	I1210 06:34:14.550132  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:14.550454  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:15.050060  830558 type.go:168] "Request Body" body=""
	I1210 06:34:15.050143  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:15.050587  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:15.550590  830558 type.go:168] "Request Body" body=""
	I1210 06:34:15.550665  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:15.551006  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:16.050139  830558 type.go:168] "Request Body" body=""
	I1210 06:34:16.050219  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:16.050581  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:16.050651  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:16.550343  830558 type.go:168] "Request Body" body=""
	I1210 06:34:16.550420  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:16.550746  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:17.050583  830558 type.go:168] "Request Body" body=""
	I1210 06:34:17.050659  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:17.051004  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:17.550305  830558 type.go:168] "Request Body" body=""
	I1210 06:34:17.550379  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:17.550661  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:18.050028  830558 type.go:168] "Request Body" body=""
	I1210 06:34:18.050117  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:18.050492  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:18.550227  830558 type.go:168] "Request Body" body=""
	I1210 06:34:18.550302  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:18.550654  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:18.550708  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:19.049907  830558 type.go:168] "Request Body" body=""
	I1210 06:34:19.049978  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:19.050300  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:19.549995  830558 type.go:168] "Request Body" body=""
	I1210 06:34:19.550076  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:19.550408  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:20.050129  830558 type.go:168] "Request Body" body=""
	I1210 06:34:20.050269  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:20.050682  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:20.550512  830558 type.go:168] "Request Body" body=""
	I1210 06:34:20.550605  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:20.550929  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:20.550983  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:21.050722  830558 type.go:168] "Request Body" body=""
	I1210 06:34:21.050804  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:21.051141  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:21.550824  830558 type.go:168] "Request Body" body=""
	I1210 06:34:21.550907  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:21.551258  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:22.050508  830558 type.go:168] "Request Body" body=""
	I1210 06:34:22.050581  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:22.050851  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:22.550614  830558 type.go:168] "Request Body" body=""
	I1210 06:34:22.550689  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:22.551037  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:22.551097  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:23.050847  830558 type.go:168] "Request Body" body=""
	I1210 06:34:23.050935  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:23.051235  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:23.549922  830558 type.go:168] "Request Body" body=""
	I1210 06:34:23.550000  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:23.550273  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:24.049978  830558 type.go:168] "Request Body" body=""
	I1210 06:34:24.050066  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:24.050419  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:24.550155  830558 type.go:168] "Request Body" body=""
	I1210 06:34:24.550230  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:24.550613  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:25.050894  830558 type.go:168] "Request Body" body=""
	I1210 06:34:25.050965  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:25.051235  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:25.051280  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:25.550372  830558 type.go:168] "Request Body" body=""
	I1210 06:34:25.550449  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:25.550796  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:26.050683  830558 type.go:168] "Request Body" body=""
	I1210 06:34:26.050763  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:26.051110  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:26.550564  830558 type.go:168] "Request Body" body=""
	I1210 06:34:26.550636  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:26.550899  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:27.050671  830558 type.go:168] "Request Body" body=""
	I1210 06:34:27.050748  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:27.051102  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:27.550781  830558 type.go:168] "Request Body" body=""
	I1210 06:34:27.550860  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:27.551195  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:27.551252  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:28.049904  830558 type.go:168] "Request Body" body=""
	I1210 06:34:28.049986  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:28.050254  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:28.550025  830558 type.go:168] "Request Body" body=""
	I1210 06:34:28.550123  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:28.550518  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:29.050220  830558 type.go:168] "Request Body" body=""
	I1210 06:34:29.050298  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:29.050678  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:29.549921  830558 type.go:168] "Request Body" body=""
	I1210 06:34:29.549996  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:29.550254  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:30.050073  830558 type.go:168] "Request Body" body=""
	I1210 06:34:30.050159  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:30.050509  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:30.050563  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:30.550516  830558 type.go:168] "Request Body" body=""
	I1210 06:34:30.550620  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:30.550952  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:31.050272  830558 type.go:168] "Request Body" body=""
	I1210 06:34:31.050339  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:31.050673  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:31.550029  830558 type.go:168] "Request Body" body=""
	I1210 06:34:31.550101  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:31.550423  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:32.050170  830558 type.go:168] "Request Body" body=""
	I1210 06:34:32.050245  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:32.050587  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:32.050647  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:32.550304  830558 type.go:168] "Request Body" body=""
	I1210 06:34:32.550386  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:32.550677  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:33.049995  830558 type.go:168] "Request Body" body=""
	I1210 06:34:33.050069  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:33.050375  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:34.051085  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-534748 poll repeats every ~500 ms from 06:34:33 through 06:35:34; every attempt fails with "dial tcp 192.168.49.2:8441: connect: connection refused" and the node_ready.go "will retry" warning recurs roughly every 2-2.5 s throughout ...]
	I1210 06:35:35.050885  830558 type.go:168] "Request Body" body=""
	I1210 06:35:35.050965  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:35.051309  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:35.550125  830558 type.go:168] "Request Body" body=""
	I1210 06:35:35.550193  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:35.550452  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:36.050024  830558 type.go:168] "Request Body" body=""
	I1210 06:35:36.050108  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:36.050450  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:36.550133  830558 type.go:168] "Request Body" body=""
	I1210 06:35:36.550209  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:36.550544  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:36.550600  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:37.050278  830558 type.go:168] "Request Body" body=""
	I1210 06:35:37.050376  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:37.050709  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:37.550074  830558 type.go:168] "Request Body" body=""
	I1210 06:35:37.550172  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:37.550549  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:38.050159  830558 type.go:168] "Request Body" body=""
	I1210 06:35:38.050237  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:38.050665  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:38.550505  830558 type.go:168] "Request Body" body=""
	I1210 06:35:38.550588  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:38.550849  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:38.550901  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:39.050640  830558 type.go:168] "Request Body" body=""
	I1210 06:35:39.050721  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:39.051071  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:39.550851  830558 type.go:168] "Request Body" body=""
	I1210 06:35:39.550926  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:39.551256  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:40.050535  830558 type.go:168] "Request Body" body=""
	I1210 06:35:40.050625  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:40.050933  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:40.550928  830558 type.go:168] "Request Body" body=""
	I1210 06:35:40.551010  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:40.551608  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:40.551663  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:41.049981  830558 type.go:168] "Request Body" body=""
	I1210 06:35:41.050064  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:41.050352  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:41.550016  830558 type.go:168] "Request Body" body=""
	I1210 06:35:41.550093  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:41.550361  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:42.050005  830558 type.go:168] "Request Body" body=""
	I1210 06:35:42.050080  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:42.050409  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:42.549993  830558 type.go:168] "Request Body" body=""
	I1210 06:35:42.550068  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:42.550359  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:43.050031  830558 type.go:168] "Request Body" body=""
	I1210 06:35:43.050099  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:43.050413  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:43.050491  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:43.550149  830558 type.go:168] "Request Body" body=""
	I1210 06:35:43.550232  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:43.550536  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:44.050209  830558 type.go:168] "Request Body" body=""
	I1210 06:35:44.050286  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:44.050649  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:44.550377  830558 type.go:168] "Request Body" body=""
	I1210 06:35:44.550446  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:44.550724  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:45.050153  830558 type.go:168] "Request Body" body=""
	I1210 06:35:45.050238  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:45.050595  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:45.050650  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:45.549952  830558 type.go:168] "Request Body" body=""
	I1210 06:35:45.550034  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:45.550414  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:46.050278  830558 type.go:168] "Request Body" body=""
	I1210 06:35:46.050372  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:46.055238  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 06:35:46.550179  830558 type.go:168] "Request Body" body=""
	I1210 06:35:46.550287  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:46.550675  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:47.050432  830558 type.go:168] "Request Body" body=""
	I1210 06:35:47.050548  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:47.050914  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:47.050975  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:47.550717  830558 type.go:168] "Request Body" body=""
	I1210 06:35:47.550835  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:47.551174  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:48.049903  830558 type.go:168] "Request Body" body=""
	I1210 06:35:48.049980  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:48.050317  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:48.550065  830558 type.go:168] "Request Body" body=""
	I1210 06:35:48.550151  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:48.550558  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:49.050850  830558 type.go:168] "Request Body" body=""
	I1210 06:35:49.050920  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:49.051255  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:49.051361  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:49.550035  830558 type.go:168] "Request Body" body=""
	I1210 06:35:49.550107  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:49.550438  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:50.050183  830558 type.go:168] "Request Body" body=""
	I1210 06:35:50.050272  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:50.050684  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:50.550583  830558 type.go:168] "Request Body" body=""
	I1210 06:35:50.550655  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:50.550936  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:51.050719  830558 type.go:168] "Request Body" body=""
	I1210 06:35:51.050800  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:51.051144  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:51.550954  830558 type.go:168] "Request Body" body=""
	I1210 06:35:51.551028  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:51.551356  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:51.551411  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:52.050680  830558 type.go:168] "Request Body" body=""
	I1210 06:35:52.050756  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:52.051067  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:52.550548  830558 type.go:168] "Request Body" body=""
	I1210 06:35:52.550625  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:52.550952  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:53.050711  830558 type.go:168] "Request Body" body=""
	I1210 06:35:53.050792  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:53.051146  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:53.550886  830558 type.go:168] "Request Body" body=""
	I1210 06:35:53.550957  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:53.551220  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:54.049953  830558 type.go:168] "Request Body" body=""
	I1210 06:35:54.050043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:54.050414  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:54.050492  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:54.549996  830558 type.go:168] "Request Body" body=""
	I1210 06:35:54.550071  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:54.550449  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:55.050715  830558 type.go:168] "Request Body" body=""
	I1210 06:35:55.050792  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:55.051106  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:55.550346  830558 type.go:168] "Request Body" body=""
	I1210 06:35:55.550419  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:55.550782  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:56.050628  830558 type.go:168] "Request Body" body=""
	I1210 06:35:56.050708  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:56.051118  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:56.051182  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:56.550940  830558 type.go:168] "Request Body" body=""
	I1210 06:35:56.551022  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:56.551289  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:57.049999  830558 type.go:168] "Request Body" body=""
	I1210 06:35:57.050074  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:57.050392  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:57.550013  830558 type.go:168] "Request Body" body=""
	I1210 06:35:57.550089  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:57.550492  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:58.050195  830558 type.go:168] "Request Body" body=""
	I1210 06:35:58.050269  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:58.050569  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:58.550015  830558 type.go:168] "Request Body" body=""
	I1210 06:35:58.550099  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:58.550430  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:35:58.550507  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:35:59.049995  830558 type.go:168] "Request Body" body=""
	I1210 06:35:59.050074  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:59.050407  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:35:59.550696  830558 type.go:168] "Request Body" body=""
	I1210 06:35:59.550768  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:35:59.551102  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:00.050842  830558 type.go:168] "Request Body" body=""
	I1210 06:36:00.050924  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:00.051234  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:00.549998  830558 type.go:168] "Request Body" body=""
	I1210 06:36:00.550074  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:00.550399  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:01.049954  830558 type.go:168] "Request Body" body=""
	I1210 06:36:01.050035  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:01.050328  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:01.050375  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:01.549990  830558 type.go:168] "Request Body" body=""
	I1210 06:36:01.550068  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:01.550400  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:02.050117  830558 type.go:168] "Request Body" body=""
	I1210 06:36:02.050215  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:02.050573  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:02.549939  830558 type.go:168] "Request Body" body=""
	I1210 06:36:02.550013  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:02.550284  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:03.050012  830558 type.go:168] "Request Body" body=""
	I1210 06:36:03.050086  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:03.050442  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:03.050527  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:03.550184  830558 type.go:168] "Request Body" body=""
	I1210 06:36:03.550270  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:03.550632  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:04.050901  830558 type.go:168] "Request Body" body=""
	I1210 06:36:04.050978  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:04.051312  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:04.550006  830558 type.go:168] "Request Body" body=""
	I1210 06:36:04.550082  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:04.550419  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:05.050032  830558 type.go:168] "Request Body" body=""
	I1210 06:36:05.050117  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:05.050489  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:05.550443  830558 type.go:168] "Request Body" body=""
	I1210 06:36:05.550542  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:05.550796  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:05.550839  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:06.050579  830558 type.go:168] "Request Body" body=""
	I1210 06:36:06.050657  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:06.051012  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:06.550829  830558 type.go:168] "Request Body" body=""
	I1210 06:36:06.550907  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:06.551240  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:07.050493  830558 type.go:168] "Request Body" body=""
	I1210 06:36:07.050573  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:07.050889  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:07.550693  830558 type.go:168] "Request Body" body=""
	I1210 06:36:07.550778  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:07.551124  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:07.551183  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:08.050922  830558 type.go:168] "Request Body" body=""
	I1210 06:36:08.051004  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:08.051346  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:08.549944  830558 type.go:168] "Request Body" body=""
	I1210 06:36:08.550015  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:08.550288  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:09.049973  830558 type.go:168] "Request Body" body=""
	I1210 06:36:09.050050  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:09.050402  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:09.549972  830558 type.go:168] "Request Body" body=""
	I1210 06:36:09.550052  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:09.550400  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:10.050587  830558 type.go:168] "Request Body" body=""
	I1210 06:36:10.050670  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:10.050953  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:10.051003  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:10.550899  830558 type.go:168] "Request Body" body=""
	I1210 06:36:10.550976  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:10.551312  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:11.050955  830558 type.go:168] "Request Body" body=""
	I1210 06:36:11.051047  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:11.051365  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:11.549986  830558 type.go:168] "Request Body" body=""
	I1210 06:36:11.550062  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:11.550380  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:12.050023  830558 type.go:168] "Request Body" body=""
	I1210 06:36:12.050101  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:12.050424  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:12.550175  830558 type.go:168] "Request Body" body=""
	I1210 06:36:12.550251  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:12.550626  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:12.550686  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:13.049890  830558 type.go:168] "Request Body" body=""
	I1210 06:36:13.049962  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:13.050215  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:13.549891  830558 type.go:168] "Request Body" body=""
	I1210 06:36:13.549970  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:13.550296  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:14.049986  830558 type.go:168] "Request Body" body=""
	I1210 06:36:14.050082  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:14.050411  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:14.550126  830558 type.go:168] "Request Body" body=""
	I1210 06:36:14.550211  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:14.550507  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:15.050062  830558 type.go:168] "Request Body" body=""
	I1210 06:36:15.050145  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:15.050506  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:15.050566  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:15.550556  830558 type.go:168] "Request Body" body=""
	I1210 06:36:15.550635  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:15.550967  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:16.050731  830558 type.go:168] "Request Body" body=""
	I1210 06:36:16.050861  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:16.051148  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:16.550930  830558 type.go:168] "Request Body" body=""
	I1210 06:36:16.551008  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:16.551326  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:17.050031  830558 type.go:168] "Request Body" body=""
	I1210 06:36:17.050113  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:17.050447  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:17.550156  830558 type.go:168] "Request Body" body=""
	I1210 06:36:17.550229  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:17.550520  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:17.550565  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:18.050027  830558 type.go:168] "Request Body" body=""
	I1210 06:36:18.050108  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:18.050447  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:18.550193  830558 type.go:168] "Request Body" body=""
	I1210 06:36:18.550278  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:18.550612  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:19.049970  830558 type.go:168] "Request Body" body=""
	I1210 06:36:19.050043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:19.050368  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:19.550014  830558 type.go:168] "Request Body" body=""
	I1210 06:36:19.550089  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:19.550419  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:20.050206  830558 type.go:168] "Request Body" body=""
	I1210 06:36:20.050292  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:20.050696  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:20.050759  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:20.550656  830558 type.go:168] "Request Body" body=""
	I1210 06:36:20.550733  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:20.551013  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:21.050835  830558 type.go:168] "Request Body" body=""
	I1210 06:36:21.050921  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:21.051263  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:21.549995  830558 type.go:168] "Request Body" body=""
	I1210 06:36:21.550076  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:21.550423  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:22.050133  830558 type.go:168] "Request Body" body=""
	I1210 06:36:22.050216  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:22.050512  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:22.550029  830558 type.go:168] "Request Body" body=""
	I1210 06:36:22.550116  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:22.550449  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:22.550527  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:23.050001  830558 type.go:168] "Request Body" body=""
	I1210 06:36:23.050090  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:23.050430  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:23.549960  830558 type.go:168] "Request Body" body=""
	I1210 06:36:23.550028  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:23.550287  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:24.050045  830558 type.go:168] "Request Body" body=""
	I1210 06:36:24.050121  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:24.050454  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:24.550232  830558 type.go:168] "Request Body" body=""
	I1210 06:36:24.550319  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:24.550669  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:24.550726  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:25.049975  830558 type.go:168] "Request Body" body=""
	I1210 06:36:25.050054  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:25.050347  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:25.550435  830558 type.go:168] "Request Body" body=""
	I1210 06:36:25.550531  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:25.550872  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:26.050576  830558 type.go:168] "Request Body" body=""
	I1210 06:36:26.050655  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:26.051009  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:26.550723  830558 type.go:168] "Request Body" body=""
	I1210 06:36:26.550798  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:26.551067  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:26.551119  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:27.050878  830558 type.go:168] "Request Body" body=""
	I1210 06:36:27.050952  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:27.051289  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:27.550017  830558 type.go:168] "Request Body" body=""
	I1210 06:36:27.550094  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:27.550415  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:28.049942  830558 type.go:168] "Request Body" body=""
	I1210 06:36:28.050024  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:28.050288  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:28.550006  830558 type.go:168] "Request Body" body=""
	I1210 06:36:28.550084  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:28.550438  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:29.050159  830558 type.go:168] "Request Body" body=""
	I1210 06:36:29.050234  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:29.050566  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:29.050621  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:29.550905  830558 type.go:168] "Request Body" body=""
	I1210 06:36:29.550972  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:29.551292  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:30.050116  830558 type.go:168] "Request Body" body=""
	I1210 06:36:30.050204  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:30.050559  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:30.550551  830558 type.go:168] "Request Body" body=""
	I1210 06:36:30.550628  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:30.550956  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:31.050278  830558 type.go:168] "Request Body" body=""
	I1210 06:36:31.050353  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:31.050643  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:31.050689  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:31.550005  830558 type.go:168] "Request Body" body=""
	I1210 06:36:31.550084  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:31.550415  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:32.050146  830558 type.go:168] "Request Body" body=""
	I1210 06:36:32.050220  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:32.050568  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:32.550834  830558 type.go:168] "Request Body" body=""
	I1210 06:36:32.550909  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:32.551181  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:33.049926  830558 type.go:168] "Request Body" body=""
	I1210 06:36:33.050020  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:33.050320  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:33.550027  830558 type.go:168] "Request Body" body=""
	I1210 06:36:33.550101  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:33.550421  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:33.550496  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:34.050148  830558 type.go:168] "Request Body" body=""
	I1210 06:36:34.050221  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:34.050554  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:34.550035  830558 type.go:168] "Request Body" body=""
	I1210 06:36:34.550113  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:34.550403  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:35.050050  830558 type.go:168] "Request Body" body=""
	I1210 06:36:35.050133  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:35.050454  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:35.550293  830558 type.go:168] "Request Body" body=""
	I1210 06:36:35.550366  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:35.550646  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:35.550688  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:36.050032  830558 type.go:168] "Request Body" body=""
	I1210 06:36:36.050119  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:36.050506  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:36.550078  830558 type.go:168] "Request Body" body=""
	I1210 06:36:36.550152  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:36.550514  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:37.050074  830558 type.go:168] "Request Body" body=""
	I1210 06:36:37.050153  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:37.050427  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:37.550003  830558 type.go:168] "Request Body" body=""
	I1210 06:36:37.550086  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:37.550452  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:38.050242  830558 type.go:168] "Request Body" body=""
	I1210 06:36:38.050345  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:38.050820  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:38.050886  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:38.550627  830558 type.go:168] "Request Body" body=""
	I1210 06:36:38.550702  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:38.550965  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:39.050786  830558 type.go:168] "Request Body" body=""
	I1210 06:36:39.050858  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:39.051199  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:39.550826  830558 type.go:168] "Request Body" body=""
	I1210 06:36:39.550908  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:39.551239  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:40.049947  830558 type.go:168] "Request Body" body=""
	I1210 06:36:40.050037  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:40.050342  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:40.550382  830558 type.go:168] "Request Body" body=""
	I1210 06:36:40.550458  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:40.550826  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:40.550883  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:41.050667  830558 type.go:168] "Request Body" body=""
	I1210 06:36:41.050745  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:41.051117  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:41.550878  830558 type.go:168] "Request Body" body=""
	I1210 06:36:41.550958  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:41.551274  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:42.050917  830558 type.go:168] "Request Body" body=""
	I1210 06:36:42.050997  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:42.051354  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:42.550036  830558 type.go:168] "Request Body" body=""
	I1210 06:36:42.550117  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:42.550436  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:43.049951  830558 type.go:168] "Request Body" body=""
	I1210 06:36:43.050067  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:43.050321  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:43.050369  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:43.549987  830558 type.go:168] "Request Body" body=""
	I1210 06:36:43.550066  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:43.550404  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:44.050824  830558 type.go:168] "Request Body" body=""
	I1210 06:36:44.050905  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:44.051231  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:44.550482  830558 type.go:168] "Request Body" body=""
	I1210 06:36:44.550555  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:44.550855  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:45.050825  830558 type.go:168] "Request Body" body=""
	I1210 06:36:45.050916  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:45.051222  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:45.051274  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:45.550929  830558 type.go:168] "Request Body" body=""
	I1210 06:36:45.551008  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:45.551345  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:46.049915  830558 type.go:168] "Request Body" body=""
	I1210 06:36:46.050010  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:46.050329  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:46.549983  830558 type.go:168] "Request Body" body=""
	I1210 06:36:46.550060  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:46.550400  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:47.050060  830558 type.go:168] "Request Body" body=""
	I1210 06:36:47.050141  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:47.050495  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:47.549925  830558 type.go:168] "Request Body" body=""
	I1210 06:36:47.549995  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:47.550273  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:47.550317  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:48.049994  830558 type.go:168] "Request Body" body=""
	I1210 06:36:48.050095  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:48.050460  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:48.550037  830558 type.go:168] "Request Body" body=""
	I1210 06:36:48.550112  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:48.550446  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:49.050030  830558 type.go:168] "Request Body" body=""
	I1210 06:36:49.050116  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:49.050497  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:49.550029  830558 type.go:168] "Request Body" body=""
	I1210 06:36:49.550104  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:49.550496  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:49.550554  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:50.050050  830558 type.go:168] "Request Body" body=""
	I1210 06:36:50.050125  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:50.050500  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:50.550519  830558 type.go:168] "Request Body" body=""
	I1210 06:36:50.550589  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:50.550850  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:51.050731  830558 type.go:168] "Request Body" body=""
	I1210 06:36:51.050803  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:51.051136  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:51.550907  830558 type.go:168] "Request Body" body=""
	I1210 06:36:51.550985  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:51.551292  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:51.551347  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:52.049966  830558 type.go:168] "Request Body" body=""
	I1210 06:36:52.050040  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:52.050305  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:52.549993  830558 type.go:168] "Request Body" body=""
	I1210 06:36:52.550070  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:52.550404  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:53.050044  830558 type.go:168] "Request Body" body=""
	I1210 06:36:53.050127  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:53.050427  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:53.550649  830558 type.go:168] "Request Body" body=""
	I1210 06:36:53.550726  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:53.551013  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:54.050845  830558 type.go:168] "Request Body" body=""
	I1210 06:36:54.050929  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:54.051278  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:54.051340  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:54.549988  830558 type.go:168] "Request Body" body=""
	I1210 06:36:54.550067  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:54.550384  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:55.049973  830558 type.go:168] "Request Body" body=""
	I1210 06:36:55.050057  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:55.050384  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:55.550589  830558 type.go:168] "Request Body" body=""
	I1210 06:36:55.550672  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:55.550984  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:56.050875  830558 type.go:168] "Request Body" body=""
	I1210 06:36:56.050955  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:56.051282  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:56.549975  830558 type.go:168] "Request Body" body=""
	I1210 06:36:56.550042  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:56.550349  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:56.550406  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:57.050072  830558 type.go:168] "Request Body" body=""
	I1210 06:36:57.050159  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:57.050499  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:57.549979  830558 type.go:168] "Request Body" body=""
	I1210 06:36:57.550054  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:57.550368  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:58.049963  830558 type.go:168] "Request Body" body=""
	I1210 06:36:58.050043  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:58.050303  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:58.549986  830558 type.go:168] "Request Body" body=""
	I1210 06:36:58.550064  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:58.550409  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:36:58.550486  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:36:59.050159  830558 type.go:168] "Request Body" body=""
	I1210 06:36:59.050244  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:59.050617  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:36:59.550004  830558 type.go:168] "Request Body" body=""
	I1210 06:36:59.550080  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:36:59.550332  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:00.050088  830558 type.go:168] "Request Body" body=""
	I1210 06:37:00.050180  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:00.050543  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:00.550848  830558 type.go:168] "Request Body" body=""
	I1210 06:37:00.550935  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:00.551280  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:00.551339  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:01.050564  830558 type.go:168] "Request Body" body=""
	I1210 06:37:01.050644  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:01.050904  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:01.550685  830558 type.go:168] "Request Body" body=""
	I1210 06:37:01.550761  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:01.551120  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:02.050955  830558 type.go:168] "Request Body" body=""
	I1210 06:37:02.051039  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:02.051359  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:02.550089  830558 type.go:168] "Request Body" body=""
	I1210 06:37:02.550158  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:02.550512  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:03.049995  830558 type.go:168] "Request Body" body=""
	I1210 06:37:03.050079  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:03.050427  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:03.050509  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:03.549974  830558 type.go:168] "Request Body" body=""
	I1210 06:37:03.550095  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:03.550438  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:04.050664  830558 type.go:168] "Request Body" body=""
	I1210 06:37:04.050742  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:04.051055  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:04.550863  830558 type.go:168] "Request Body" body=""
	I1210 06:37:04.550938  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:04.551272  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:05.049983  830558 type.go:168] "Request Body" body=""
	I1210 06:37:05.050061  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:05.050389  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:05.550411  830558 type.go:168] "Request Body" body=""
	I1210 06:37:05.550500  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:05.550764  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:05.550808  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:06.050441  830558 type.go:168] "Request Body" body=""
	I1210 06:37:06.050533  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:06.050866  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:06.550678  830558 type.go:168] "Request Body" body=""
	I1210 06:37:06.550761  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:06.551104  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:07.050870  830558 type.go:168] "Request Body" body=""
	I1210 06:37:07.050944  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:07.051251  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:07.549972  830558 type.go:168] "Request Body" body=""
	I1210 06:37:07.550053  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:07.550410  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:08.050155  830558 type.go:168] "Request Body" body=""
	I1210 06:37:08.050239  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:08.050601  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:08.050664  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:08.549949  830558 type.go:168] "Request Body" body=""
	I1210 06:37:08.550023  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:08.550357  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:09.050028  830558 type.go:168] "Request Body" body=""
	I1210 06:37:09.050105  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:09.050443  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:09.550204  830558 type.go:168] "Request Body" body=""
	I1210 06:37:09.550291  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:09.550711  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:10.050422  830558 type.go:168] "Request Body" body=""
	I1210 06:37:10.050521  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:10.050851  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:10.050899  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:10.550710  830558 type.go:168] "Request Body" body=""
	I1210 06:37:10.550785  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:10.551141  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:11.050942  830558 type.go:168] "Request Body" body=""
	I1210 06:37:11.051021  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:11.051363  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:11.549975  830558 type.go:168] "Request Body" body=""
	I1210 06:37:11.550047  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:11.550368  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:12.050025  830558 type.go:168] "Request Body" body=""
	I1210 06:37:12.050103  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:12.050446  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:12.550179  830558 type.go:168] "Request Body" body=""
	I1210 06:37:12.550253  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:12.550680  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:12.550735  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:13.049956  830558 type.go:168] "Request Body" body=""
	I1210 06:37:13.050028  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:13.050303  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:13.550005  830558 type.go:168] "Request Body" body=""
	I1210 06:37:13.550080  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:13.550413  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:14.050155  830558 type.go:168] "Request Body" body=""
	I1210 06:37:14.050237  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:14.050614  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:14.550902  830558 type.go:168] "Request Body" body=""
	I1210 06:37:14.550998  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:14.551307  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:14.551376  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:15.050054  830558 type.go:168] "Request Body" body=""
	I1210 06:37:15.050140  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:15.050549  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:15.550678  830558 type.go:168] "Request Body" body=""
	I1210 06:37:15.550756  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:15.551093  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:16.050865  830558 type.go:168] "Request Body" body=""
	I1210 06:37:16.050946  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:16.051228  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:16.549930  830558 type.go:168] "Request Body" body=""
	I1210 06:37:16.550004  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:16.550336  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:17.049921  830558 type.go:168] "Request Body" body=""
	I1210 06:37:17.049995  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:17.050336  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:17.050393  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:17.550046  830558 type.go:168] "Request Body" body=""
	I1210 06:37:17.550123  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:17.550394  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:18.049994  830558 type.go:168] "Request Body" body=""
	I1210 06:37:18.050068  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:18.050366  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:18.550054  830558 type.go:168] "Request Body" body=""
	I1210 06:37:18.550130  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:18.550489  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:19.050104  830558 type.go:168] "Request Body" body=""
	I1210 06:37:19.050185  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:19.050515  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:37:19.050566  830558 node_ready.go:55] error getting node "functional-534748" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-534748": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:37:19.550225  830558 type.go:168] "Request Body" body=""
	I1210 06:37:19.550302  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:19.550665  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:20.050424  830558 type.go:168] "Request Body" body=""
	I1210 06:37:20.050518  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:20.050884  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-534748 poll repeated every ~500ms from 06:37:20.5 through 06:37:45.0, each response empty because the connection was refused; node_ready.go:55 re-logged the same "dial tcp 192.168.49.2:8441: connect: connection refused" retry warning every few seconds ...]
	I1210 06:37:45.550527  830558 type.go:168] "Request Body" body=""
	I1210 06:37:45.550601  830558 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-534748" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:37:45.550931  830558 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:37:46.050582  830558 type.go:168] "Request Body" body=""
	I1210 06:37:46.050725  830558 node_ready.go:38] duration metric: took 6m0.000935284s for node "functional-534748" to be "Ready" ...
	I1210 06:37:46.053848  830558 out.go:203] 
	W1210 06:37:46.056787  830558 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 06:37:46.056817  830558 out.go:285] * 
	W1210 06:37:46.059108  830558 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:37:46.062914  830558 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 06:37:53 functional-534748 containerd[5224]: time="2025-12-10T06:37:53.520352544Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:37:54 functional-534748 containerd[5224]: time="2025-12-10T06:37:54.565252348Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 10 06:37:54 functional-534748 containerd[5224]: time="2025-12-10T06:37:54.567605851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 10 06:37:54 functional-534748 containerd[5224]: time="2025-12-10T06:37:54.574773209Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:37:54 functional-534748 containerd[5224]: time="2025-12-10T06:37:54.575228705Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:37:55 functional-534748 containerd[5224]: time="2025-12-10T06:37:55.553226651Z" level=info msg="No images store for sha256:0c729ebacec82a4a862e39f331b1dc02cab7e87861cddd7a8db1fd64af001e55"
	Dec 10 06:37:55 functional-534748 containerd[5224]: time="2025-12-10T06:37:55.555377928Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-534748\""
	Dec 10 06:37:55 functional-534748 containerd[5224]: time="2025-12-10T06:37:55.563505518Z" level=info msg="ImageCreate event name:\"sha256:54106a51504f7a89ca38a9b17f1e7c790a91bdd52bce5badc4621cab1917817f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:37:55 functional-534748 containerd[5224]: time="2025-12-10T06:37:55.563948461Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-534748\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:37:56 functional-534748 containerd[5224]: time="2025-12-10T06:37:56.365118911Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 10 06:37:56 functional-534748 containerd[5224]: time="2025-12-10T06:37:56.367504054Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 10 06:37:56 functional-534748 containerd[5224]: time="2025-12-10T06:37:56.369566928Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 10 06:37:56 functional-534748 containerd[5224]: time="2025-12-10T06:37:56.381672073Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.262809286Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.265411703Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.267292798Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.275537420Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.410388053Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.412560664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.420398890Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.420731070Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.594360566Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.596511088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.604043292Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:37:57 functional-534748 containerd[5224]: time="2025-12-10T06:37:57.604379230Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:01.806964    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:01.807789    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:01.809356    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:01.809894    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:01.811459    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 06:38:01 up  5:20,  0 user,  load average: 0.48, 0.32, 0.78
	Linux functional-534748 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:37:58 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:37:59 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 10 06:37:59 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:59 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:37:59 functional-534748 kubelet[9203]: E1210 06:37:59.627881    9203 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:37:59 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:37:59 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:38:00 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 828.
	Dec 10 06:38:00 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:38:00 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:38:00 functional-534748 kubelet[9216]: E1210 06:38:00.372527    9216 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:38:00 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:38:00 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:38:01 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 829.
	Dec 10 06:38:01 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:38:01 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:38:01 functional-534748 kubelet[9244]: E1210 06:38:01.110228    9244 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:38:01 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:38:01 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:38:01 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 830.
	Dec 10 06:38:01 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:38:01 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:38:01 functional-534748 kubelet[9331]: E1210 06:38:01.859493    9331 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:38:01 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:38:01 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
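The kubelet section above shows the actual root cause: on this cgroup v1 host (kernel 5.15, per the kernel section), kubelet v1.35.0-beta.0 refuses to start at all, so the apiserver on port 8441 never comes up and every readiness poll is refused. Per the error text, the node would only boot if the KubeletConfiguration option it names ('FailCgroupV1') were explicitly set to false. A hedged sketch for inspecting the crash loop by hand, reusing the systemctl/journalctl advice kubeadm prints later in this report (profile name and binary path as used throughout this run):

	# Inspect the crash-looping kubelet inside the minikube node (docker driver)
	out/minikube-linux-arm64 ssh -p functional-534748 -- sudo systemctl status kubelet
	# Tail the unit log; the pipe runs on the host after ssh returns
	out/minikube-linux-arm64 ssh -p functional-534748 -- sudo journalctl -xeu kubelet --no-pager | tail -n 40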
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748: exit status 2 (376.845387ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-534748" apiserver is not running, skipping kubectl commands (state="Stopped")
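The empty "container status" table in the logs above means no apiserver container was ever created, which matches the "Stopped" status here. A quick manual confirmation from the host, assuming the usual kicbase node image where crictl is available inside the node container (the container is named after the profile, per the docker inspect output below):

	# List all CRI containers on the node; falls through if none match
	docker exec functional-534748 crictl ps -a | grep kube-apiserver || echo "no kube-apiserver container"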
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.49s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (735.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-534748 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1210 06:40:14.424112  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:42:35.782641  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:43:58.850630  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:45:14.428841  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:47:35.783265  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:50:14.428902  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-534748 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m13.058758091s)

                                                
                                                
-- stdout --
	* [functional-534748] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-534748" primary control-plane node in "functional-534748" cluster
	* Pulling base image v0.0.48-1765319469-22089 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00070392s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: identical to the "initialization failed, will try again" kubeadm transcript above, except the kubelet health check now reports "The kubelet is not healthy after 4m0.000371584s" and the final error is: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: byte-for-byte identical to the "X Error starting cluster" kubeadm transcript directly above.
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
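	
	The checks and the remediation suggested above can be run directly against this profile. A minimal sketch, using only the commands and the flag that appear in the error text and suggestion:
	
		# inspect kubelet logs on the node (command taken from the kubeadm error text)
		minikube ssh -p functional-534748 -- sudo journalctl -xeu kubelet
		# probe the same healthz endpoint kubeadm polls
		minikube ssh -p functional-534748 -- curl -sSL http://127.0.0.1:10248/healthz
		# retry the start with the suggested kubelet cgroup-driver override
		out/minikube-linux-arm64 start -p functional-534748 --extra-config=kubelet.cgroup-driver=systemd
	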

** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-534748 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m13.062942445s for "functional-534748" cluster.
I1210 06:50:16.009680  786751 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-534748
helpers_test.go:244: (dbg) docker inspect functional-534748:

-- stdout --
	[
	    {
	        "Id": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	        "Created": "2025-12-10T06:23:23.608302198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 825111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:23:23.673039154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hostname",
	        "HostsPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hosts",
	        "LogPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db-json.log",
	        "Name": "/functional-534748",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-534748:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-534748",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	                "LowerDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-534748",
	                "Source": "/var/lib/docker/volumes/functional-534748/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-534748",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-534748",
	                "name.minikube.sigs.k8s.io": "functional-534748",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3110adc4cbcb3c834e173481804e433b3e0d0c32a77d5d828fb821433a717f76",
	            "SandboxKey": "/var/run/docker/netns/3110adc4cbcb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-534748": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:67:c6:ed:32:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5c6adbf364f07065a2170dca5c03ba4b1a3df833c656e117d5727d09797ca30e",
	                    "EndpointID": "ba255b7075cbb83aa425533c0034d7778d609f47fa6442925eb6c8393edb0fa6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-534748",
	                        "afb46bc1850e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
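helpers_test.go: note: individual fields of the inspect output above can be queried with the same Go templates the harness itself runs later in this log. A minimal sketch:

	# host port mapped to the node's SSH port 22/tcp (33530 in this run)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-534748
	# container IP on the per-profile network (192.168.49.2 in this run)
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' functional-534748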
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748: exit status 2 (335.027888ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-634209 image ls --format yaml --alsologtostderr                                                                                              │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ functional-634209 ssh pgrep buildkitd                                                                                                                   │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ image   │ functional-634209 image ls --format json --alsologtostderr                                                                                              │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image   │ functional-634209 image build -t localhost/my-image:functional-634209 testdata/build --alsologtostderr                                                  │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image   │ functional-634209 image ls --format table --alsologtostderr                                                                                             │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image   │ functional-634209 image ls                                                                                                                              │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ delete  │ -p functional-634209                                                                                                                                    │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start   │ -p functional-534748 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ start   │ -p functional-534748 --alsologtostderr -v=8                                                                                                             │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:31 UTC │                     │
	│ cache   │ functional-534748 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ functional-534748 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ functional-534748 cache add registry.k8s.io/pause:latest                                                                                                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ functional-534748 cache add minikube-local-cache-test:functional-534748                                                                                 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ functional-534748 cache delete minikube-local-cache-test:functional-534748                                                                              │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ ssh     │ functional-534748 ssh sudo crictl images                                                                                                                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ ssh     │ functional-534748 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ ssh     │ functional-534748 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │                     │
	│ cache   │ functional-534748 cache reload                                                                                                                          │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ ssh     │ functional-534748 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ kubectl │ functional-534748 kubectl -- --context functional-534748 get pods                                                                                       │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │                     │
	│ start   │ -p functional-534748 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:38 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:38:02
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:38:02.996848  836363 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:38:02.996953  836363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:38:02.996957  836363 out.go:374] Setting ErrFile to fd 2...
	I1210 06:38:02.996961  836363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:38:02.997226  836363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 06:38:02.997576  836363 out.go:368] Setting JSON to false
	I1210 06:38:02.998612  836363 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":19207,"bootTime":1765329476,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 06:38:02.998671  836363 start.go:143] virtualization:  
	I1210 06:38:03.004094  836363 out.go:179] * [functional-534748] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:38:03.007279  836363 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:38:03.007472  836363 notify.go:221] Checking for updates...
	I1210 06:38:03.013532  836363 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:38:03.016433  836363 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:38:03.019434  836363 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 06:38:03.022270  836363 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:38:03.025162  836363 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:38:03.028574  836363 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:38:03.028673  836363 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:38:03.063427  836363 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:38:03.063527  836363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:38:03.124292  836363 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 06:38:03.114881143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:38:03.124387  836363 docker.go:319] overlay module found
	I1210 06:38:03.127603  836363 out.go:179] * Using the docker driver based on existing profile
	I1210 06:38:03.130606  836363 start.go:309] selected driver: docker
	I1210 06:38:03.130616  836363 start.go:927] validating driver "docker" against &{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:38:03.130726  836363 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:38:03.130828  836363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:38:03.183470  836363 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 06:38:03.17400928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:38:03.183897  836363 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:38:03.183921  836363 cni.go:84] Creating CNI manager for ""
	I1210 06:38:03.183969  836363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:38:03.184018  836363 start.go:353] cluster config:
	{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:38:03.188981  836363 out.go:179] * Starting "functional-534748" primary control-plane node in "functional-534748" cluster
	I1210 06:38:03.191768  836363 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 06:38:03.194630  836363 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:38:03.197557  836363 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:38:03.197592  836363 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 06:38:03.197600  836363 cache.go:65] Caching tarball of preloaded images
	I1210 06:38:03.197644  836363 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:38:03.197695  836363 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 06:38:03.197704  836363 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1210 06:38:03.197812  836363 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/config.json ...
	I1210 06:38:03.219374  836363 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:38:03.219395  836363 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:38:03.219415  836363 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:38:03.219445  836363 start.go:360] acquireMachinesLock for functional-534748: {Name:mkd9a3d78ae3a00b69c5be0f7badb099aea924eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:38:03.219514  836363 start.go:364] duration metric: took 49.855µs to acquireMachinesLock for "functional-534748"
	I1210 06:38:03.219532  836363 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:38:03.219536  836363 fix.go:54] fixHost starting: 
	I1210 06:38:03.219816  836363 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:38:03.236144  836363 fix.go:112] recreateIfNeeded on functional-534748: state=Running err=<nil>
	W1210 06:38:03.236163  836363 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:38:03.239412  836363 out.go:252] * Updating the running docker "functional-534748" container ...
	I1210 06:38:03.239438  836363 machine.go:94] provisionDockerMachine start ...
	I1210 06:38:03.239539  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:03.255986  836363 main.go:143] libmachine: Using SSH client type: native
	I1210 06:38:03.256288  836363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:38:03.256294  836363 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:38:03.393920  836363 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-534748
	
	I1210 06:38:03.393934  836363 ubuntu.go:182] provisioning hostname "functional-534748"
	I1210 06:38:03.393994  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:03.411659  836363 main.go:143] libmachine: Using SSH client type: native
	I1210 06:38:03.411963  836363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:38:03.411982  836363 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-534748 && echo "functional-534748" | sudo tee /etc/hostname
	I1210 06:38:03.556341  836363 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-534748
	
	I1210 06:38:03.556409  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:03.574119  836363 main.go:143] libmachine: Using SSH client type: native
	I1210 06:38:03.574414  836363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:38:03.574427  836363 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-534748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-534748/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-534748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:38:03.711044  836363 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:38:03.711071  836363 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 06:38:03.711104  836363 ubuntu.go:190] setting up certificates
	I1210 06:38:03.711119  836363 provision.go:84] configureAuth start
	I1210 06:38:03.711202  836363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
	I1210 06:38:03.730176  836363 provision.go:143] copyHostCerts
	I1210 06:38:03.730250  836363 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 06:38:03.730257  836363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 06:38:03.730338  836363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 06:38:03.730431  836363 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 06:38:03.730435  836363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 06:38:03.730459  836363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 06:38:03.730669  836363 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 06:38:03.730673  836363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 06:38:03.730699  836363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 06:38:03.730787  836363 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.functional-534748 san=[127.0.0.1 192.168.49.2 functional-534748 localhost minikube]
	I1210 06:38:03.830346  836363 provision.go:177] copyRemoteCerts
	I1210 06:38:03.830399  836363 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:38:03.830448  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:03.847359  836363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:38:03.942214  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 06:38:03.959615  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:38:03.976341  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:38:03.993197  836363 provision.go:87] duration metric: took 282.055172ms to configureAuth
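	(The server certificate copied to /etc/docker/server.pem above can be sanity-checked against the SANs listed in the generation step. A minimal sketch, assuming openssl is available inside the kicbase container, which this log does not confirm:
		# decode the freshly provisioned server cert and show its SAN entries
		docker exec functional-534748 openssl x509 -in /etc/docker/server.pem -noout -text
	)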
	I1210 06:38:03.993214  836363 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:38:03.993400  836363 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:38:03.993405  836363 machine.go:97] duration metric: took 753.963524ms to provisionDockerMachine
	I1210 06:38:03.993412  836363 start.go:293] postStartSetup for "functional-534748" (driver="docker")
	I1210 06:38:03.993421  836363 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:38:03.993478  836363 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:38:03.993515  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:04.011825  836363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:38:04.110674  836363 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:38:04.114166  836363 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:38:04.114184  836363 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:38:04.114196  836363 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 06:38:04.114252  836363 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 06:38:04.114330  836363 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 06:38:04.114407  836363 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts -> hosts in /etc/test/nested/copy/786751
	I1210 06:38:04.114451  836363 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/786751
	I1210 06:38:04.122085  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 06:38:04.140353  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts --> /etc/test/nested/copy/786751/hosts (40 bytes)
	I1210 06:38:04.160314  836363 start.go:296] duration metric: took 166.888171ms for postStartSetup
	I1210 06:38:04.160387  836363 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:38:04.160439  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:04.179224  836363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:38:04.271903  836363 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:38:04.277112  836363 fix.go:56] duration metric: took 1.057568371s for fixHost
	I1210 06:38:04.277129  836363 start.go:83] releasing machines lock for "functional-534748", held for 1.057608798s
	I1210 06:38:04.277219  836363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
	I1210 06:38:04.295104  836363 ssh_runner.go:195] Run: cat /version.json
	I1210 06:38:04.295130  836363 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:38:04.295198  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:04.295203  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:04.320108  836363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:38:04.320646  836363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:38:04.418978  836363 ssh_runner.go:195] Run: systemctl --version
	I1210 06:38:04.509352  836363 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:38:04.513794  836363 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:38:04.513869  836363 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:38:04.521471  836363 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:38:04.521486  836363 start.go:496] detecting cgroup driver to use...
	I1210 06:38:04.521523  836363 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:38:04.521580  836363 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 06:38:04.537005  836363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 06:38:04.550809  836363 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:38:04.550892  836363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:38:04.567139  836363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:38:04.580704  836363 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:38:04.697131  836363 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:38:04.843057  836363 docker.go:234] disabling docker service ...
	I1210 06:38:04.843134  836363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:38:04.858243  836363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:38:04.871472  836363 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:38:04.992555  836363 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:38:05.113941  836363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:38:05.127335  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:38:05.141919  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 06:38:05.151900  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 06:38:05.161151  836363 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 06:38:05.161213  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 06:38:05.170764  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:38:05.180471  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 06:38:05.189238  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:38:05.197957  836363 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:38:05.206107  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 06:38:05.215515  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 06:38:05.224555  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 06:38:05.233326  836363 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:38:05.241235  836363 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:38:05.248850  836363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:38:05.372410  836363 ssh_runner.go:195] Run: sudo systemctl restart containerd
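	(After the sed-based rewrite and restart above, the effective containerd settings can be verified on the node. A minimal sketch, using the config path edited above and the crictl binary this log invokes elsewhere:
		# confirm the cgroup-driver rewrite took effect (SystemdCgroup = false means cgroupfs)
		minikube ssh -p functional-534748 -- grep SystemdCgroup /etc/containerd/config.toml
		# dump the runtime's own view of its configuration over CRI
		minikube ssh -p functional-534748 -- sudo crictl info
	)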
	I1210 06:38:05.513843  836363 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 06:38:05.513915  836363 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 06:38:05.519638  836363 start.go:564] Will wait 60s for crictl version
	I1210 06:38:05.519732  836363 ssh_runner.go:195] Run: which crictl
	I1210 06:38:05.524751  836363 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:38:05.554788  836363 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 06:38:05.554852  836363 ssh_runner.go:195] Run: containerd --version
	I1210 06:38:05.575345  836363 ssh_runner.go:195] Run: containerd --version
	I1210 06:38:05.606405  836363 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 06:38:05.609314  836363 cli_runner.go:164] Run: docker network inspect functional-534748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:38:05.625429  836363 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 06:38:05.632180  836363 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1210 06:38:05.635024  836363 kubeadm.go:884] updating cluster {Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:38:05.635199  836363 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:38:05.635275  836363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:38:05.663485  836363 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 06:38:05.663496  836363 containerd.go:534] Images already preloaded, skipping extraction
	I1210 06:38:05.663555  836363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:38:05.692188  836363 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 06:38:05.692214  836363 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:38:05.692220  836363 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1210 06:38:05.692316  836363 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-534748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:38:05.692382  836363 ssh_runner.go:195] Run: sudo crictl info
	I1210 06:38:05.716412  836363 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
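For context on the extraconfig.go line above: minikube lets --extra-config=<component>.<key>=<value> entries replace a component's built-in defaults, which is what produces the NamespaceAutoProvision override here. A minimal standalone sketch of that merge, not minikube's actual code (the ExtraOption type and the defaults map below are illustrative assumptions):

	package main

	import "fmt"

	// ExtraOption is a hypothetical stand-in for one user-supplied
	// --extra-config entry of the form component.key=value.
	type ExtraOption struct {
		Component, Key, Value string
	}

	func main() {
		// Defaults the apiserver would otherwise receive.
		defaults := map[string]string{
			"enable-admission-plugins": "NamespaceLifecycle,LimitRanger,ServiceAccount," +
				"DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction," +
				"MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota",
		}

		user := []ExtraOption{
			{Component: "apiserver", Key: "enable-admission-plugins", Value: "NamespaceAutoProvision"},
		}

		for _, o := range user {
			if o.Component != "apiserver" {
				continue
			}
			if old, ok := defaults[o.Key]; ok {
				// Mirrors the "Overwriting default ..." log line above.
				fmt.Printf("overwriting default %s=%s with %s\n", o.Key, old, o.Value)
			}
			defaults[o.Key] = o.Value
		}
		fmt.Println(defaults["enable-admission-plugins"]) // NamespaceAutoProvision
	}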
	I1210 06:38:05.716430  836363 cni.go:84] Creating CNI manager for ""
	I1210 06:38:05.716438  836363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:38:05.716453  836363 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:38:05.716479  836363 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-534748 NodeName:functional-534748 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false Kubel
etConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:38:05.716586  836363 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-534748"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
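The file rendered above is four YAML documents separated by ---. Before a file like this is shipped to the node it can be sanity-checked locally; a sketch using gopkg.in/yaml.v3, assuming the rendered config was saved as kubeadm.yaml (the path is an assumption, and this check is not part of minikube):

	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // illustrative path
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		// A yaml.Decoder steps through multi-document files one "---" at a time.
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				log.Fatalf("invalid YAML document: %v", err)
			}
			// Every kubeadm/kubelet/kube-proxy document carries kind and apiVersion.
			fmt.Printf("%v (%v)\n", doc["kind"], doc["apiVersion"])
		}
	}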
	I1210 06:38:05.716652  836363 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 06:38:05.724579  836363 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:38:05.724638  836363 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:38:05.732044  836363 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 06:38:05.744806  836363 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 06:38:05.757235  836363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1210 06:38:05.769602  836363 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:38:05.773238  836363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:38:05.892525  836363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:38:06.296632  836363 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748 for IP: 192.168.49.2
	I1210 06:38:06.296643  836363 certs.go:195] generating shared ca certs ...
	I1210 06:38:06.296658  836363 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:38:06.296809  836363 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 06:38:06.296849  836363 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 06:38:06.296855  836363 certs.go:257] generating profile certs ...
	I1210 06:38:06.296937  836363 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key
	I1210 06:38:06.297021  836363 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key.7cb3dc2f
	I1210 06:38:06.297068  836363 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key
	I1210 06:38:06.297177  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 06:38:06.297208  836363 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 06:38:06.297216  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:38:06.297246  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 06:38:06.297268  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:38:06.297291  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 06:38:06.297337  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 06:38:06.297938  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:38:06.317159  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:38:06.336653  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:38:06.357682  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:38:06.376860  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:38:06.394800  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:38:06.412862  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:38:06.430175  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:38:06.447717  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 06:38:06.465124  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:38:06.482520  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 06:38:06.500341  836363 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:38:06.513157  836363 ssh_runner.go:195] Run: openssl version
	I1210 06:38:06.519293  836363 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 06:38:06.526724  836363 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 06:38:06.534054  836363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 06:38:06.537762  836363 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 06:38:06.537817  836363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 06:38:06.579287  836363 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:38:06.586741  836363 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:06.593909  836363 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:38:06.601430  836363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:06.605107  836363 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:06.605174  836363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:06.646057  836363 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:38:06.653276  836363 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 06:38:06.660757  836363 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 06:38:06.668784  836363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 06:38:06.672757  836363 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 06:38:06.672825  836363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 06:38:06.713985  836363 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
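The three link-and-hash rounds above follow the standard OpenSSL CA-directory layout: each CA certificate is symlinked into /etc/ssl/certs both under its own name and as <subject-hash>.0, where the hash is what openssl x509 -hash -noout prints (b5213941 for the minikube CA in this run). A sketch of the same dance, shelling out to openssl exactly as the log does (paths are illustrative and the program must run as root):

	package main

	import (
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path

		// Subject hash, e.g. "b5213941" for the minikube CA in this run.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out))

		// Link both by name and by <hash>.0 so OpenSSL's directory lookup finds it.
		for _, link := range []string{
			filepath.Join("/etc/ssl/certs", filepath.Base(cert)),
			filepath.Join("/etc/ssl/certs", hash+".0"),
		} {
			os.Remove(link) // the -f in "ln -fs"
			if err := os.Symlink(cert, link); err != nil {
				log.Fatal(err)
			}
		}
	}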
	I1210 06:38:06.721257  836363 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:38:06.724932  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:38:06.765952  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:38:06.807038  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:38:06.847752  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:38:06.890289  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:38:06.933893  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
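Each -checkend 86400 call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the run decides whether a cert needs regenerating. The equivalent check with Go's crypto/x509, as a sketch (the certificate path is illustrative):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt") // illustrative path
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}

		// openssl x509 -checkend 86400: fail if the cert expires within 24h.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24h")
	}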
	I1210 06:38:06.976437  836363 kubeadm.go:401] StartCluster: {Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:38:06.976545  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 06:38:06.976606  836363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:38:07.011412  836363 cri.go:89] found id: ""
	I1210 06:38:07.011470  836363 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:38:07.019342  836363 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:38:07.019351  836363 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:38:07.019420  836363 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:38:07.026888  836363 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:38:07.027424  836363 kubeconfig.go:125] found "functional-534748" server: "https://192.168.49.2:8441"
	I1210 06:38:07.028660  836363 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:38:07.037364  836363 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 06:23:31.333930823 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 06:38:05.762986837 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
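The drift check rides on diff's exit status: 0 means the files are identical, 1 means they differ (producing the unified diff above), and anything else is an error. A sketch of reading that tri-state from Go, using the same file paths as the log:

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "diff", "-u",
			"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		out, err := cmd.Output()

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("configs identical, no reconfigure needed")
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
			// Exit status 1 == files differ; the unified diff is on stdout.
			fmt.Printf("config drift detected:\n%s", out)
		default:
			log.Fatalf("diff failed: %v", err)
		}
	}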
	I1210 06:38:07.037389  836363 kubeadm.go:1161] stopping kube-system containers ...
	I1210 06:38:07.037401  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1210 06:38:07.037465  836363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:38:07.075015  836363 cri.go:89] found id: ""
	I1210 06:38:07.075109  836363 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 06:38:07.098429  836363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:38:07.106312  836363 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 10 06:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 10 06:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 10 06:27 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 10 06:27 /etc/kubernetes/scheduler.conf
	
	I1210 06:38:07.106367  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:38:07.114107  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:38:07.122067  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:38:07.122121  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:38:07.130176  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:38:07.138001  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:38:07.138055  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:38:07.145554  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:38:07.153390  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:38:07.153446  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:38:07.160768  836363 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:38:07.168493  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:38:07.213471  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:38:08.026655  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:38:08.236384  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:38:08.298826  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
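Rather than a full kubeadm init, the restart path replays individual init phases against the refreshed config, in the order seen above. A standalone sketch of driving the same sequence (not minikube's code; the env PATH wrapper from the log is omitted for brevity):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		kubeadm := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm"
		cfg := "/var/tmp/minikube/kubeadm.yaml"

		// Same phase order as the restart above: certs, kubeconfigs,
		// kubelet bring-up, static control-plane pods, then local etcd.
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", cfg)
			cmd := exec.Command("sudo", append([]string{kubeadm}, args...)...)
			if out, err := cmd.CombinedOutput(); err != nil {
				log.Fatalf("%v failed: %v\n%s", p, err, out)
			}
		}
	}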
	I1210 06:38:08.351741  836363 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:38:08.351821  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:08.852713  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:09.352205  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	... (the same pgrep probe repeated at ~500ms intervals through 06:39:07, identical except for timestamps) ...
	I1210 06:39:07.852642  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
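The run above is minikube waiting for a kube-apiserver process to appear: the same pgrep probe is retried on a roughly 500ms tick until it succeeds or an overall deadline (about a minute here) expires, at which point diagnostics begin. A standalone sketch of such a wait loop, mirroring the command and cadence from the log (the helper name is illustrative):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls pgrep until the pattern matches or ctx expires.
	func waitForProcess(ctx context.Context, pattern string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			// pgrep exits 0 only when at least one process matches.
			if exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", pattern).Run() == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("timed out waiting for %q: %w", pattern, ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
		defer cancel()
		if err := waitForProcess(ctx, "kube-apiserver.*minikube.*"); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver process appeared")
	}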
	I1210 06:39:08.352868  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:08.352944  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:08.381205  836363 cri.go:89] found id: ""
	I1210 06:39:08.381219  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.381227  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:08.381232  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:08.381288  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:08.404633  836363 cri.go:89] found id: ""
	I1210 06:39:08.404646  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.404654  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:08.404659  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:08.404721  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:08.428513  836363 cri.go:89] found id: ""
	I1210 06:39:08.428527  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.428534  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:08.428546  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:08.428606  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:08.453023  836363 cri.go:89] found id: ""
	I1210 06:39:08.453036  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.453043  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:08.453049  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:08.453105  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:08.481527  836363 cri.go:89] found id: ""
	I1210 06:39:08.481540  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.481547  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:08.481552  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:08.481609  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:08.506550  836363 cri.go:89] found id: ""
	I1210 06:39:08.506565  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.506580  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:08.506585  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:08.506649  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:08.531724  836363 cri.go:89] found id: ""
	I1210 06:39:08.531738  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.531745  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:08.531752  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:08.531763  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:08.571815  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:08.571832  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:08.630094  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:08.630112  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:08.647317  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:08.647335  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:08.715592  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:08.706872   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.707541   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.709199   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.709733   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.711367   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:08.706872   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.707541   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.709199   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.709733   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.711367   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:08.715603  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:08.715614  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
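Each diagnostic cycle above scans the same component list with crictl ps -a --quiet --name=<component>; an empty ID list for every component is what triggers the log gathering that follows. A sketch of that scan (component list taken from the log, everything else illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, c := range components {
			// --quiet prints only container IDs, one per line.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+c).Output()
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", c, err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("%s: %d containers\n", c, len(ids))
		}
	}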
	I1210 06:39:11.280652  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:11.290422  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:11.290516  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:11.314331  836363 cri.go:89] found id: ""
	I1210 06:39:11.314345  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.314352  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:11.314357  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:11.314419  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:11.337726  836363 cri.go:89] found id: ""
	I1210 06:39:11.337741  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.337747  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:11.337752  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:11.337812  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:11.365800  836363 cri.go:89] found id: ""
	I1210 06:39:11.365815  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.365821  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:11.365826  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:11.365886  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:11.394804  836363 cri.go:89] found id: ""
	I1210 06:39:11.394818  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.394825  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:11.394830  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:11.394887  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:11.419726  836363 cri.go:89] found id: ""
	I1210 06:39:11.419740  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.419746  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:11.419751  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:11.419810  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:11.445533  836363 cri.go:89] found id: ""
	I1210 06:39:11.445547  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.445554  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:11.445560  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:11.445618  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:11.470212  836363 cri.go:89] found id: ""
	I1210 06:39:11.470227  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.470233  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:11.470241  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:11.470251  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:11.529183  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:11.529202  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:11.546384  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:11.546400  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:11.640312  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:11.631803   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.632307   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.633874   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.635142   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.635867   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:11.631803   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.632307   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.633874   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.635142   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.635867   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:11.640322  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:11.640333  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:11.703828  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:11.703850  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:14.230665  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:14.241121  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:14.241183  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:14.268951  836363 cri.go:89] found id: ""
	I1210 06:39:14.268964  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.268974  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:14.268979  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:14.269035  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:14.292742  836363 cri.go:89] found id: ""
	I1210 06:39:14.292761  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.292768  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:14.292773  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:14.292838  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:14.317527  836363 cri.go:89] found id: ""
	I1210 06:39:14.317540  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.317547  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:14.317552  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:14.317609  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:14.344738  836363 cri.go:89] found id: ""
	I1210 06:39:14.344751  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.344758  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:14.344764  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:14.344822  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:14.369086  836363 cri.go:89] found id: ""
	I1210 06:39:14.369101  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.369108  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:14.369114  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:14.369172  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:14.393919  836363 cri.go:89] found id: ""
	I1210 06:39:14.393932  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.393938  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:14.393943  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:14.394005  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:14.418228  836363 cri.go:89] found id: ""
	I1210 06:39:14.418242  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.418249  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:14.418257  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:14.418267  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:14.481544  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:14.481564  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:14.509051  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:14.509072  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:14.574238  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:14.574259  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:14.594306  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:14.594323  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:14.659264  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:14.651062   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.651501   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.653284   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.653744   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.655187   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:14.651062   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.651501   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.653284   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.653744   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.655187   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
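Note that every kubectl attempt in these cycles fails at the TCP layer ("connection refused" on localhost:8441), before TLS or authentication could even be tried: nothing is listening because no apiserver container ever started. A quick way to distinguish that from an API-level failure, as a sketch (address taken from the log):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// "connection refused" here means no listener at all,
		// as opposed to an apiserver that is up but unhealthy.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8441")
	}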
	I1210 06:39:17.159960  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:17.169978  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:17.170036  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:17.194333  836363 cri.go:89] found id: ""
	I1210 06:39:17.194347  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.194354  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:17.194359  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:17.194418  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:17.218507  836363 cri.go:89] found id: ""
	I1210 06:39:17.218521  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.218528  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:17.218533  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:17.218617  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:17.243499  836363 cri.go:89] found id: ""
	I1210 06:39:17.243513  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.243521  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:17.243527  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:17.243585  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:17.271019  836363 cri.go:89] found id: ""
	I1210 06:39:17.271034  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.271041  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:17.271048  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:17.271106  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:17.296491  836363 cri.go:89] found id: ""
	I1210 06:39:17.296506  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.296513  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:17.296517  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:17.296574  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:17.327127  836363 cri.go:89] found id: ""
	I1210 06:39:17.327142  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.327149  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:17.327156  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:17.327214  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:17.351001  836363 cri.go:89] found id: ""
	I1210 06:39:17.351016  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.351023  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:17.351031  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:17.351046  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:17.408952  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:17.408971  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:17.425660  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:17.425676  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:17.495167  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:17.486424   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.487213   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.488883   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.489501   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.491179   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:17.495179  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:17.495190  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:17.562848  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:17.562868  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
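
The loop above is minikube's health probe after a failed start: it pgreps for a kube-apiserver process, then asks the CRI runtime for containers matching each control-plane component, and every query returns an empty ID list. Below is a minimal Go sketch of that per-component check, using a hypothetical runCmd helper in place of minikube's SSH runner; the crictl query is the one the log records.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // runCmd executes a shell command locally for illustration;
    // minikube runs the equivalent command on the node over SSH.
    func runCmd(cmd string) (string, error) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            // Same query as the log: all states, IDs only, filtered by name.
            out, err := runCmd("sudo crictl ps -a --quiet --name=" + name)
            ids := strings.Fields(out)
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %d container(s)\n", name, len(ids))
        }
    }
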
	I1210 06:39:20.100845  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:20.111238  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:20.111303  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:20.135715  836363 cri.go:89] found id: ""
	I1210 06:39:20.135730  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.135737  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:20.135742  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:20.135849  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:20.162728  836363 cri.go:89] found id: ""
	I1210 06:39:20.162742  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.162750  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:20.162754  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:20.162817  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:20.186896  836363 cri.go:89] found id: ""
	I1210 06:39:20.186910  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.186918  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:20.186923  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:20.187033  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:20.211401  836363 cri.go:89] found id: ""
	I1210 06:39:20.211416  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.211423  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:20.211428  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:20.211494  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:20.241049  836363 cri.go:89] found id: ""
	I1210 06:39:20.241063  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.241071  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:20.241075  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:20.241136  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:20.264812  836363 cri.go:89] found id: ""
	I1210 06:39:20.264826  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.264833  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:20.264839  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:20.264905  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:20.289153  836363 cri.go:89] found id: ""
	I1210 06:39:20.289167  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.289179  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:20.289187  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:20.289198  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:20.305825  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:20.305841  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:20.372702  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:20.364207   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.364892   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.366572   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.367140   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.368841   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:20.372716  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:20.372727  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:20.434137  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:20.434156  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:20.462784  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:20.462801  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
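
With no control-plane containers to inspect, the gather step falls back to the node-level sources shown above: the kubelet and containerd journals, filtered dmesg output, and raw container status with a docker fallback. A standalone sketch that runs the same shell commands, again with a hypothetical local runCmd standing in for the SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func runCmd(cmd string) (string, error) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        // The command strings are taken verbatim from the log above.
        sources := [][2]string{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"containerd", "sudo journalctl -u containerd -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, s := range sources {
            out, err := runCmd(s[1])
            if err != nil {
                fmt.Printf("=== %s (failed: %v) ===\n", s[0], err)
                continue
            }
            fmt.Printf("=== %s ===\n%s\n", s[0], out)
        }
    }
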
	I1210 06:39:23.020338  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:23.033250  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:23.033312  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:23.057227  836363 cri.go:89] found id: ""
	I1210 06:39:23.057241  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.057247  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:23.057252  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:23.057310  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:23.082261  836363 cri.go:89] found id: ""
	I1210 06:39:23.082275  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.082282  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:23.082287  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:23.082346  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:23.106424  836363 cri.go:89] found id: ""
	I1210 06:39:23.106438  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.106445  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:23.106451  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:23.106554  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:23.132399  836363 cri.go:89] found id: ""
	I1210 06:39:23.132414  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.132429  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:23.132435  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:23.132492  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:23.162454  836363 cri.go:89] found id: ""
	I1210 06:39:23.162494  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.162501  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:23.162507  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:23.162581  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:23.187219  836363 cri.go:89] found id: ""
	I1210 06:39:23.187233  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.187240  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:23.187245  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:23.187310  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:23.212781  836363 cri.go:89] found id: ""
	I1210 06:39:23.212795  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.212802  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:23.212809  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:23.212821  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:23.269301  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:23.269321  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:23.286019  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:23.286034  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:23.349588  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:23.342068   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.342600   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.344048   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.344478   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.345899   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:23.349598  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:23.349608  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:23.410637  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:23.410657  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
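
Each kubectl attempt above fails with "dial tcp [::1]:8441: connect: connection refused", meaning nothing is listening on apiserver port 8441. A quick TCP probe, sketched below, reproduces that symptom directly without going through kubectl:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Probe the apiserver port; with the apiserver down this
        // returns the same "connection refused" the log shows.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }
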
	I1210 06:39:25.946659  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:25.956427  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:25.956484  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:25.980198  836363 cri.go:89] found id: ""
	I1210 06:39:25.980212  836363 logs.go:282] 0 containers: []
	W1210 06:39:25.980219  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:25.980224  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:25.980282  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:26.007385  836363 cri.go:89] found id: ""
	I1210 06:39:26.007400  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.007408  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:26.007413  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:26.007504  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:26.036729  836363 cri.go:89] found id: ""
	I1210 06:39:26.036743  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.036750  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:26.036755  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:26.036816  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:26.062224  836363 cri.go:89] found id: ""
	I1210 06:39:26.062238  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.062245  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:26.062250  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:26.062310  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:26.087647  836363 cri.go:89] found id: ""
	I1210 06:39:26.087661  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.087668  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:26.087682  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:26.087742  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:26.111730  836363 cri.go:89] found id: ""
	I1210 06:39:26.111744  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.111751  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:26.111756  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:26.111815  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:26.140490  836363 cri.go:89] found id: ""
	I1210 06:39:26.140504  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.140511  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:26.140525  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:26.140534  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:26.196200  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:26.196219  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:26.212571  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:26.212587  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:26.273577  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:26.265176   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.265699   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.267151   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.267679   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.269363   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:26.273590  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:26.273603  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:26.335078  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:26.335098  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
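
The "describe nodes" step shells out to the version-pinned kubectl binary with the node's kubeconfig and captures stdout and stderr separately, which is how the empty stdout and the connection-refused stderr end up in the log. An illustrative sketch of that invocation follows; the binary and kubeconfig paths are copied from the log lines above, and this is not minikube's actual implementation:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        if err := cmd.Run(); err != nil {
            // With the apiserver down, this exits with status 1 and
            // stderr carries the connection-refused lines seen above.
            fmt.Printf("describe nodes failed: %v\nstderr:\n%s", err, stderr.String())
            return
        }
        fmt.Print(stdout.String())
    }
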
	I1210 06:39:28.869553  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:28.880899  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:28.880964  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:28.906428  836363 cri.go:89] found id: ""
	I1210 06:39:28.906442  836363 logs.go:282] 0 containers: []
	W1210 06:39:28.906449  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:28.906454  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:28.906544  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:28.931886  836363 cri.go:89] found id: ""
	I1210 06:39:28.931900  836363 logs.go:282] 0 containers: []
	W1210 06:39:28.931908  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:28.931912  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:28.931973  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:28.961315  836363 cri.go:89] found id: ""
	I1210 06:39:28.961329  836363 logs.go:282] 0 containers: []
	W1210 06:39:28.961336  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:28.961340  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:28.961401  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:28.986397  836363 cri.go:89] found id: ""
	I1210 06:39:28.986411  836363 logs.go:282] 0 containers: []
	W1210 06:39:28.986419  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:28.986425  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:28.986507  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:29.012532  836363 cri.go:89] found id: ""
	I1210 06:39:29.012546  836363 logs.go:282] 0 containers: []
	W1210 06:39:29.012554  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:29.012559  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:29.012617  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:29.041722  836363 cri.go:89] found id: ""
	I1210 06:39:29.041736  836363 logs.go:282] 0 containers: []
	W1210 06:39:29.041744  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:29.041749  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:29.041810  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:29.067638  836363 cri.go:89] found id: ""
	I1210 06:39:29.067652  836363 logs.go:282] 0 containers: []
	W1210 06:39:29.067660  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:29.067675  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:29.067686  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:29.123932  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:29.123951  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:29.140346  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:29.140363  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:29.205033  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:29.196885   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.197511   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.199079   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.199683   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.201215   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:29.205044  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:29.205056  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:29.268564  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:29.268592  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:31.797415  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:31.810439  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:31.810560  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:31.839718  836363 cri.go:89] found id: ""
	I1210 06:39:31.839731  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.839738  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:31.839743  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:31.839812  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:31.866887  836363 cri.go:89] found id: ""
	I1210 06:39:31.866901  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.866908  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:31.866913  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:31.866971  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:31.896088  836363 cri.go:89] found id: ""
	I1210 06:39:31.896102  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.896109  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:31.896114  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:31.896183  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:31.920769  836363 cri.go:89] found id: ""
	I1210 06:39:31.920783  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.920790  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:31.920804  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:31.920870  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:31.944941  836363 cri.go:89] found id: ""
	I1210 06:39:31.944955  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.944973  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:31.944979  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:31.945062  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:31.969699  836363 cri.go:89] found id: ""
	I1210 06:39:31.969713  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.969719  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:31.969734  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:31.969796  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:31.994263  836363 cri.go:89] found id: ""
	I1210 06:39:31.994288  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.994296  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:31.994305  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:31.994315  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:32.051337  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:32.051358  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:32.068506  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:32.068524  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:32.133010  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:32.124121   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.124862   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.126702   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.127174   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.128721   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:32.133022  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:32.133032  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:32.195411  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:32.195432  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
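
The timestamps show the whole gather-and-probe cycle repeating on a roughly 3-second cadence (06:39:26, :29, :32, ...) until an overall timeout expires. A sketch of such a bounded poll, assuming a bare TCP probe as the readiness check; minikube's real wait logic also inspects the process and container state:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // apiserverUp is a stand-in readiness check: true once the
    // apiserver port accepts a TCP connection.
    func apiserverUp(addr string) bool {
        conn, err := net.DialTimeout("tcp", addr, time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if apiserverUp("localhost:8441") {
                fmt.Println("apiserver is up")
                return
            }
            time.Sleep(3 * time.Second) // matches the interval in the log
        }
        fmt.Println("timed out waiting for apiserver")
    }
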
	I1210 06:39:34.725830  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:34.736154  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:34.736227  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:34.760592  836363 cri.go:89] found id: ""
	I1210 06:39:34.760606  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.760613  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:34.760618  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:34.760679  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:34.789194  836363 cri.go:89] found id: ""
	I1210 06:39:34.789208  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.789215  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:34.789220  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:34.789290  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:34.821768  836363 cri.go:89] found id: ""
	I1210 06:39:34.821783  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.821798  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:34.821804  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:34.821862  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:34.851156  836363 cri.go:89] found id: ""
	I1210 06:39:34.851182  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.851190  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:34.851195  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:34.851262  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:34.881339  836363 cri.go:89] found id: ""
	I1210 06:39:34.881353  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.881361  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:34.881366  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:34.881439  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:34.906857  836363 cri.go:89] found id: ""
	I1210 06:39:34.906871  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.906878  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:34.906884  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:34.906950  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:34.935793  836363 cri.go:89] found id: ""
	I1210 06:39:34.935807  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.935814  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:34.935822  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:34.935832  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:34.993322  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:34.993345  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:35.011292  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:35.011309  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:35.078043  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:35.069050   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:35.070041   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:35.070728   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:35.072495   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:35.073080   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:35.078052  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:35.078063  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:35.146644  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:35.146671  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:37.678658  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:37.688848  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:37.688925  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:37.713621  836363 cri.go:89] found id: ""
	I1210 06:39:37.713635  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.713642  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:37.713647  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:37.713706  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:37.738638  836363 cri.go:89] found id: ""
	I1210 06:39:37.738651  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.738658  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:37.738663  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:37.738728  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:37.767364  836363 cri.go:89] found id: ""
	I1210 06:39:37.767378  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.767385  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:37.767390  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:37.767446  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:37.804827  836363 cri.go:89] found id: ""
	I1210 06:39:37.804841  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.804848  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:37.804854  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:37.804911  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:37.830424  836363 cri.go:89] found id: ""
	I1210 06:39:37.830438  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.830445  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:37.830449  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:37.830529  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:37.862851  836363 cri.go:89] found id: ""
	I1210 06:39:37.862864  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.862871  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:37.862876  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:37.862933  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:37.887629  836363 cri.go:89] found id: ""
	I1210 06:39:37.887643  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.887650  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:37.887686  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:37.887698  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:37.946033  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:37.946053  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:37.962951  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:37.962969  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:38.030263  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:38.021061   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:38.021797   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:38.022740   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:38.024684   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:38.025056   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:38.030274  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:38.030285  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:38.093462  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:38.093482  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:40.622687  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:40.632840  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:40.632902  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:40.657235  836363 cri.go:89] found id: ""
	I1210 06:39:40.657248  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.657255  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:40.657261  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:40.657320  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:40.681835  836363 cri.go:89] found id: ""
	I1210 06:39:40.681849  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.681857  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:40.681862  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:40.681919  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:40.708085  836363 cri.go:89] found id: ""
	I1210 06:39:40.708099  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.708106  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:40.708111  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:40.708172  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:40.734852  836363 cri.go:89] found id: ""
	I1210 06:39:40.734867  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.734874  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:40.734879  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:40.734937  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:40.760765  836363 cri.go:89] found id: ""
	I1210 06:39:40.760779  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.760786  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:40.760791  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:40.760862  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:40.785777  836363 cri.go:89] found id: ""
	I1210 06:39:40.785791  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.785797  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:40.785802  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:40.785862  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:40.812943  836363 cri.go:89] found id: ""
	I1210 06:39:40.812957  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.812963  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:40.812971  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:40.812981  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:40.882713  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:40.874213   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:40.874907   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:40.876393   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:40.876781   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:40.878311   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:40.882724  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:40.882746  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:40.946502  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:40.946522  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:40.973695  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:40.973711  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:41.028086  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:41.028105  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
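The poll above repeats for each expected control-plane component: first a `pgrep` for a running apiserver process, then one `crictl ps -a --quiet --name=<component>` per component, each coming back empty. A minimal local sketch of that probe loop, assuming `sudo` and `crictl` are available and substituting `os/exec` for minikube's ssh_runner (all names here are illustrative):

```go
// Illustrative only: approximates the container poll visible in this log
// by shelling out to crictl locally instead of over SSH.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// controlPlaneNames mirrors the container names probed in the log.
var controlPlaneNames = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

// listContainerIDs runs `sudo crictl ps -a --quiet --name=<name>`,
// which prints one container ID per line, or nothing when no
// container matches -- exactly the `found id: ""` case above.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range controlPlaneNames {
		ids, err := listContainerIDs(name)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: found %d container(s)\n", name, len(ids))
	}
}
```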
	I1210 06:39:43.544743  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:43.554582  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:43.554639  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:43.578394  836363 cri.go:89] found id: ""
	I1210 06:39:43.578408  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.578415  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:43.578421  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:43.578501  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:43.602120  836363 cri.go:89] found id: ""
	I1210 06:39:43.602134  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.602141  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:43.602152  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:43.602211  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:43.626641  836363 cri.go:89] found id: ""
	I1210 06:39:43.626655  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.626662  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:43.626666  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:43.626730  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:43.650792  836363 cri.go:89] found id: ""
	I1210 06:39:43.650805  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.650812  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:43.650817  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:43.650875  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:43.676181  836363 cri.go:89] found id: ""
	I1210 06:39:43.676195  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.676201  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:43.676207  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:43.676264  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:43.700288  836363 cri.go:89] found id: ""
	I1210 06:39:43.700301  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.700308  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:43.700317  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:43.700376  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:43.723140  836363 cri.go:89] found id: ""
	I1210 06:39:43.723154  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.723161  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:43.723169  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:43.723179  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:43.777323  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:43.777344  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:43.793764  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:43.793781  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:43.876520  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:43.868105   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:43.868820   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:43.870334   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:43.870859   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:43.872328   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:43.876531  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:43.876546  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:43.937962  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:43.937982  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:46.471232  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:46.481349  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:46.481414  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:46.505604  836363 cri.go:89] found id: ""
	I1210 06:39:46.505618  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.505625  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:46.505631  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:46.505693  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:46.530584  836363 cri.go:89] found id: ""
	I1210 06:39:46.530598  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.530605  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:46.530610  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:46.530667  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:46.555675  836363 cri.go:89] found id: ""
	I1210 06:39:46.555689  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.555696  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:46.555701  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:46.555758  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:46.579225  836363 cri.go:89] found id: ""
	I1210 06:39:46.579240  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.579246  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:46.579252  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:46.579309  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:46.603318  836363 cri.go:89] found id: ""
	I1210 06:39:46.603332  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.603339  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:46.603344  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:46.603400  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:46.628198  836363 cri.go:89] found id: ""
	I1210 06:39:46.628212  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.628219  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:46.628224  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:46.628280  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:46.651425  836363 cri.go:89] found id: ""
	I1210 06:39:46.651439  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.651446  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:46.651454  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:46.651464  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:46.706345  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:46.706364  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:46.722718  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:46.722733  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:46.788441  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:46.780334   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.780989   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.782563   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.783115   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.784714   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:46.788461  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:46.788474  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:46.856250  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:46.856269  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
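Each cycle also gathers the same log sources: kubelet and containerd via journalctl, filtered kernel messages via dmesg, and container status via crictl with a docker fallback. A sketch that runs the same shell one-liners locally (the `gather` helper is hypothetical; minikube executes these commands over SSH):

```go
// Illustrative stand-in for the "Gathering logs for ..." steps above.
package main

import (
	"fmt"
	"os/exec"
)

// gather runs a shell one-liner and prints its combined output.
func gather(label, cmd string) {
	fmt.Println("==>", label)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("gather failed:", err)
	}
	fmt.Print(string(out))
}

func main() {
	// Last 400 lines of each relevant unit, as in the log above.
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("containerd", "sudo journalctl -u containerd -n 400")
	// -H human-readable, -P no pager, -L=never no color;
	// only warn-or-worse kernel messages.
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	// Container status, falling back to docker when crictl is absent.
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
```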
	I1210 06:39:49.385907  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:49.395772  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:49.395833  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:49.419273  836363 cri.go:89] found id: ""
	I1210 06:39:49.419286  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.419294  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:49.419299  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:49.419357  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:49.444546  836363 cri.go:89] found id: ""
	I1210 06:39:49.444560  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.444567  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:49.444572  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:49.444634  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:49.469099  836363 cri.go:89] found id: ""
	I1210 06:39:49.469113  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.469120  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:49.469125  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:49.469182  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:49.497447  836363 cri.go:89] found id: ""
	I1210 06:39:49.497461  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.497468  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:49.497473  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:49.497531  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:49.521614  836363 cri.go:89] found id: ""
	I1210 06:39:49.521628  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.521635  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:49.521640  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:49.521700  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:49.546324  836363 cri.go:89] found id: ""
	I1210 06:39:49.546338  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.546345  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:49.546351  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:49.546408  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:49.569503  836363 cri.go:89] found id: ""
	I1210 06:39:49.569516  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.569523  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:49.569531  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:49.569541  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:49.625182  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:49.625201  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:49.641754  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:49.641772  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:49.705447  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:49.697491   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.698134   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.699780   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.700234   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.701724   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:49.705457  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:49.705478  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:49.766615  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:49.766634  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:52.302628  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:52.312769  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:52.312832  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:52.338228  836363 cri.go:89] found id: ""
	I1210 06:39:52.338242  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.338249  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:52.338254  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:52.338315  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:52.363997  836363 cri.go:89] found id: ""
	I1210 06:39:52.364011  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.364018  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:52.364024  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:52.364083  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:52.389867  836363 cri.go:89] found id: ""
	I1210 06:39:52.389881  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.389888  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:52.389894  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:52.389959  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:52.416171  836363 cri.go:89] found id: ""
	I1210 06:39:52.416186  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.416193  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:52.416199  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:52.416262  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:52.440036  836363 cri.go:89] found id: ""
	I1210 06:39:52.440051  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.440058  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:52.440064  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:52.440127  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:52.465173  836363 cri.go:89] found id: ""
	I1210 06:39:52.465188  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.465195  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:52.465200  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:52.465266  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:52.490275  836363 cri.go:89] found id: ""
	I1210 06:39:52.490289  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.490296  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:52.490304  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:52.490316  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:52.507524  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:52.507541  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:52.572947  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:52.565302   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.565716   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.567214   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.567524   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.569003   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:52.572957  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:52.572967  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:52.639898  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:52.639920  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:52.671836  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:52.671853  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
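The "describe nodes" step fails with exit status 1 on every cycle because kubectl cannot reach the apiserver. A sketch of detecting that failure mode in Go by inspecting the exit code (the kubectl path and kubeconfig are copied from the log; the surrounding plumbing is illustrative, not minikube's actual code):

```go
// Illustrative: run the same describe-nodes command the log shows and
// distinguish a non-zero exit (apiserver unreachable) from success.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Matches "Process exited with status 1" above: kubectl exits
		// non-zero when the connection to the server is refused.
		fmt.Printf("describe nodes failed (status %d):\n%s",
			exitErr.ExitCode(), out)
		return
	}
	fmt.Print(string(out))
}
```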
	I1210 06:39:55.228555  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:55.238632  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:55.238692  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:55.262819  836363 cri.go:89] found id: ""
	I1210 06:39:55.262833  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.262840  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:55.262845  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:55.262903  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:55.287262  836363 cri.go:89] found id: ""
	I1210 06:39:55.287276  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.287282  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:55.287287  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:55.287347  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:55.312064  836363 cri.go:89] found id: ""
	I1210 06:39:55.312077  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.312084  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:55.312089  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:55.312147  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:55.340546  836363 cri.go:89] found id: ""
	I1210 06:39:55.340560  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.340566  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:55.340572  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:55.340638  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:55.369203  836363 cri.go:89] found id: ""
	I1210 06:39:55.369217  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.369224  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:55.369229  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:55.369294  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:55.394186  836363 cri.go:89] found id: ""
	I1210 06:39:55.394200  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.394213  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:55.394218  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:55.394275  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:55.418250  836363 cri.go:89] found id: ""
	I1210 06:39:55.418264  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.418271  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:55.418279  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:55.418293  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:55.449481  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:55.449497  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:55.505651  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:55.505670  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:55.522722  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:55.522739  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:55.595372  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:55.580192   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.580773   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.588978   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.589842   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.591512   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:55.595383  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:55.595396  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:58.156956  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:58.167095  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:58.167157  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:58.191075  836363 cri.go:89] found id: ""
	I1210 06:39:58.191089  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.191096  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:58.191101  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:58.191161  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:58.219145  836363 cri.go:89] found id: ""
	I1210 06:39:58.219159  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.219166  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:58.219171  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:58.219230  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:58.243820  836363 cri.go:89] found id: ""
	I1210 06:39:58.243834  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.243841  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:58.243846  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:58.243903  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:58.273220  836363 cri.go:89] found id: ""
	I1210 06:39:58.273234  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.273241  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:58.273246  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:58.273306  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:58.296744  836363 cri.go:89] found id: ""
	I1210 06:39:58.296758  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.296765  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:58.296770  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:58.296826  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:58.321374  836363 cri.go:89] found id: ""
	I1210 06:39:58.321389  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.321395  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:58.321401  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:58.321460  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:58.345587  836363 cri.go:89] found id: ""
	I1210 06:39:58.345601  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.345607  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:58.345615  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:58.345626  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:58.363238  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:58.363255  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:58.430409  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:58.422109   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.422784   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.424524   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.425019   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.426627   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:58.430420  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:58.430439  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:58.492984  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:58.493002  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:58.520139  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:58.520155  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
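Taken together, this section is one bounded retry loop: probe for the apiserver, gather diagnostics, sleep about three seconds, repeat until a deadline. A condensed, hypothetical version of that pattern (the two-minute deadline is an arbitrary example, not this test's actual timeout):

```go
// Hypothetical condensation of this section's retry loop. Real minikube
// code drives the same probe over SSH and interleaves log gathering.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning uses the same probe as the log: pgrep exits
// non-zero when no process matches the pattern.
func apiserverRunning() bool {
	err := exec.Command("sudo", "pgrep", "-xnf",
		"kube-apiserver.*minikube.*").Run()
	return err == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // arbitrary example
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is running")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s cadence above
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```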
	I1210 06:40:01.076701  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:01.088176  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:01.088237  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:01.115625  836363 cri.go:89] found id: ""
	I1210 06:40:01.115641  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.115648  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:01.115653  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:01.115713  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:01.142756  836363 cri.go:89] found id: ""
	I1210 06:40:01.142771  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.142779  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:01.142784  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:01.142854  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:01.174021  836363 cri.go:89] found id: ""
	I1210 06:40:01.174036  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.174043  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:01.174048  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:01.174115  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:01.200639  836363 cri.go:89] found id: ""
	I1210 06:40:01.200654  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.200661  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:01.200667  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:01.200729  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:01.225759  836363 cri.go:89] found id: ""
	I1210 06:40:01.225772  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.225779  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:01.225785  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:01.225851  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:01.250911  836363 cri.go:89] found id: ""
	I1210 06:40:01.250926  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.250934  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:01.250940  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:01.251003  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:01.279325  836363 cri.go:89] found id: ""
	I1210 06:40:01.279339  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.279347  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:01.279355  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:01.279366  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:01.335352  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:01.335371  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:01.352578  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:01.352596  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:01.422752  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:01.414308   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.415520   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.417210   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.417554   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.418810   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:40:01.422763  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:01.422778  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:01.484637  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:01.484658  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:04.016723  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:04.027134  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:04.027199  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:04.058110  836363 cri.go:89] found id: ""
	I1210 06:40:04.058123  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.058131  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:04.058136  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:04.058194  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:04.085839  836363 cri.go:89] found id: ""
	I1210 06:40:04.085853  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.085859  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:04.085874  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:04.085938  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:04.112846  836363 cri.go:89] found id: ""
	I1210 06:40:04.112870  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.112877  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:04.112884  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:04.112952  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:04.144605  836363 cri.go:89] found id: ""
	I1210 06:40:04.144619  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.144626  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:04.144631  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:04.144698  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:04.170078  836363 cri.go:89] found id: ""
	I1210 06:40:04.170093  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.170111  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:04.170116  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:04.170187  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:04.195493  836363 cri.go:89] found id: ""
	I1210 06:40:04.195560  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.195568  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:04.195573  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:04.195663  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:04.224488  836363 cri.go:89] found id: ""
	I1210 06:40:04.224502  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.224509  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:04.224518  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:04.224528  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:04.280631  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:04.280651  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:04.297645  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:04.297663  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:04.366830  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:04.356738   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.357139   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.358834   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.359561   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.361366   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:04.356738   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.357139   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.358834   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.359561   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.361366   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
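Each "describe nodes" attempt shells out to the version-matched kubectl that minikube stages under /var/lib/minikube/binaries/, pointed at the in-node kubeconfig. While no apiserver is listening, kubectl exits with status 1, which logs.go records as "failed describe nodes". The same call can be issued by hand from inside the node (illustrative, not an extra step the test performs):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig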
	I1210 06:40:04.366842  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:04.366854  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:04.430241  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:04.430260  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:06.963156  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:06.973415  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:06.973480  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:06.997210  836363 cri.go:89] found id: ""
	I1210 06:40:06.997223  836363 logs.go:282] 0 containers: []
	W1210 06:40:06.997230  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:06.997235  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:06.997292  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:07.024360  836363 cri.go:89] found id: ""
	I1210 06:40:07.024374  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.024381  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:07.024386  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:07.024443  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:07.056844  836363 cri.go:89] found id: ""
	I1210 06:40:07.056857  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.056864  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:07.056869  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:07.056926  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:07.095983  836363 cri.go:89] found id: ""
	I1210 06:40:07.095997  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.096004  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:07.096010  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:07.096080  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:07.126932  836363 cri.go:89] found id: ""
	I1210 06:40:07.126947  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.126954  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:07.126958  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:07.127020  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:07.151807  836363 cri.go:89] found id: ""
	I1210 06:40:07.151823  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.151831  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:07.151835  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:07.151895  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:07.175459  836363 cri.go:89] found id: ""
	I1210 06:40:07.175473  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.175480  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:07.175489  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:07.175499  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:07.229963  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:07.229984  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:07.249632  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:07.249654  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:07.314011  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:07.306140   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.306891   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.308497   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.308812   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.310262   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:07.306140   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.306891   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.308497   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.308812   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.310262   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:07.314022  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:07.314034  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:07.376148  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:07.376173  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
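The recurring pgrep probe is the apiserver liveness check that drives this loop. Per procps pgrep: -f matches the pattern against the full command line, -x requires the whole line to match the pattern, and -n reports only the newest matching process; a non-zero exit means no match. A standalone form (illustrative):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo running || echo not-running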
	I1210 06:40:09.907917  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:09.918267  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:09.918339  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:09.946634  836363 cri.go:89] found id: ""
	I1210 06:40:09.946648  836363 logs.go:282] 0 containers: []
	W1210 06:40:09.946654  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:09.946660  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:09.946729  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:09.971532  836363 cri.go:89] found id: ""
	I1210 06:40:09.971546  836363 logs.go:282] 0 containers: []
	W1210 06:40:09.971553  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:09.971558  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:09.971633  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:09.995748  836363 cri.go:89] found id: ""
	I1210 06:40:09.995762  836363 logs.go:282] 0 containers: []
	W1210 06:40:09.995768  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:09.995773  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:09.995832  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:10.026807  836363 cri.go:89] found id: ""
	I1210 06:40:10.026821  836363 logs.go:282] 0 containers: []
	W1210 06:40:10.026828  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:10.026834  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:10.026902  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:10.060800  836363 cri.go:89] found id: ""
	I1210 06:40:10.060815  836363 logs.go:282] 0 containers: []
	W1210 06:40:10.060822  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:10.060831  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:10.060896  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:10.092175  836363 cri.go:89] found id: ""
	I1210 06:40:10.092190  836363 logs.go:282] 0 containers: []
	W1210 06:40:10.092200  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:10.092205  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:10.092267  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:10.121165  836363 cri.go:89] found id: ""
	I1210 06:40:10.121179  836363 logs.go:282] 0 containers: []
	W1210 06:40:10.121187  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:10.121197  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:10.121208  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:10.137742  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:10.137761  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:10.202959  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:10.194457   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.195204   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.196954   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.197466   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.199061   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:10.194457   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.195204   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.196954   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.197466   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.199061   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
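In the crictl invocations above, -a includes exited containers, --quiet prints container IDs only, and --name filters by container name; empty output is what cri.go reports as `found id: ""` and logs.go as "0 containers". Illustrative standalone check:

    # no output here means no etcd container exists, running or exited
    sudo crictl ps -a --quiet --name=etcd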
	I1210 06:40:10.202970  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:10.202993  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:10.263838  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:10.263860  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:10.290431  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:10.290450  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:12.845609  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:12.856045  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:12.856108  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:12.881725  836363 cri.go:89] found id: ""
	I1210 06:40:12.881740  836363 logs.go:282] 0 containers: []
	W1210 06:40:12.881756  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:12.881762  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:12.881836  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:12.905554  836363 cri.go:89] found id: ""
	I1210 06:40:12.905568  836363 logs.go:282] 0 containers: []
	W1210 06:40:12.905575  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:12.905580  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:12.905636  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:12.929343  836363 cri.go:89] found id: ""
	I1210 06:40:12.929357  836363 logs.go:282] 0 containers: []
	W1210 06:40:12.929363  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:12.929369  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:12.929427  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:12.958063  836363 cri.go:89] found id: ""
	I1210 06:40:12.958077  836363 logs.go:282] 0 containers: []
	W1210 06:40:12.958083  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:12.958089  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:12.958153  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:12.982226  836363 cri.go:89] found id: ""
	I1210 06:40:12.982240  836363 logs.go:282] 0 containers: []
	W1210 06:40:12.982247  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:12.982252  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:12.982309  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:13.008275  836363 cri.go:89] found id: ""
	I1210 06:40:13.008296  836363 logs.go:282] 0 containers: []
	W1210 06:40:13.008304  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:13.008309  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:13.008376  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:13.032141  836363 cri.go:89] found id: ""
	I1210 06:40:13.032155  836363 logs.go:282] 0 containers: []
	W1210 06:40:13.032161  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:13.032169  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:13.032180  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:13.094529  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:13.094550  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:13.112774  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:13.112794  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:13.177133  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:13.169073   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.169669   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.171355   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.171773   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.173232   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:13.169073   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.169669   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.171355   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.171773   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.173232   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:13.177142  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:13.177157  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:13.237784  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:13.237804  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
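The containerd and kubelet gathers are plain journal reads: journalctl -u selects a single systemd unit and -n 400 keeps the last 400 entries. Runnable by hand on the node (illustrative):

    sudo journalctl -u containerd -n 400
    sudo journalctl -u kubelet -n 400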
	I1210 06:40:15.773100  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:15.783808  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:15.783870  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:15.808779  836363 cri.go:89] found id: ""
	I1210 06:40:15.808792  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.808799  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:15.808811  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:15.808873  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:15.835122  836363 cri.go:89] found id: ""
	I1210 06:40:15.835136  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.835143  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:15.835147  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:15.835205  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:15.859608  836363 cri.go:89] found id: ""
	I1210 06:40:15.859622  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.859630  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:15.859635  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:15.859698  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:15.884617  836363 cri.go:89] found id: ""
	I1210 06:40:15.884631  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.884637  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:15.884648  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:15.884708  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:15.917645  836363 cri.go:89] found id: ""
	I1210 06:40:15.917659  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.917666  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:15.917671  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:15.917738  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:15.942216  836363 cri.go:89] found id: ""
	I1210 06:40:15.942230  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.942237  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:15.942246  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:15.942306  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:15.969023  836363 cri.go:89] found id: ""
	I1210 06:40:15.969038  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.969045  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:15.969053  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:15.969065  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:16.025303  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:16.025322  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:16.043036  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:16.043055  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:16.124792  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:16.116191   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.116732   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.118445   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.118910   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.120594   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:16.116191   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.116732   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.118445   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.118910   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.120594   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:16.124803  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:16.124829  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:16.187018  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:16.187038  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:18.721268  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:18.732117  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:18.732179  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:18.759703  836363 cri.go:89] found id: ""
	I1210 06:40:18.759717  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.759724  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:18.759729  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:18.759803  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:18.785469  836363 cri.go:89] found id: ""
	I1210 06:40:18.785482  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.785492  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:18.785497  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:18.785556  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:18.809013  836363 cri.go:89] found id: ""
	I1210 06:40:18.809026  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.809033  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:18.809038  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:18.809100  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:18.837693  836363 cri.go:89] found id: ""
	I1210 06:40:18.837707  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.837714  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:18.837719  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:18.837777  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:18.862280  836363 cri.go:89] found id: ""
	I1210 06:40:18.862294  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.862300  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:18.862306  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:18.862366  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:18.887552  836363 cri.go:89] found id: ""
	I1210 06:40:18.887566  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.887573  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:18.887578  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:18.887644  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:18.912972  836363 cri.go:89] found id: ""
	I1210 06:40:18.912987  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.912994  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:18.913002  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:18.913020  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:18.968777  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:18.968818  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:18.987249  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:18.987267  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:19.053510  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:19.044510   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.045135   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.047145   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.047494   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.049061   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:19.044510   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.045135   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.047145   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.047494   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.049061   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:19.053536  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:19.053548  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:19.127699  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:19.127719  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
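For the dmesg gather, the flags are util-linux dmesg options: -H produces human-readable output, -P suppresses the pager that -H would otherwise start, -L=never disables color, and --level restricts output to the listed priorities before tail trims it to 400 lines. Assuming a util-linux dmesg in the node image:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400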
	I1210 06:40:21.655771  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:21.665930  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:21.665996  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:21.690403  836363 cri.go:89] found id: ""
	I1210 06:40:21.690417  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.690424  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:21.690429  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:21.690526  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:21.716021  836363 cri.go:89] found id: ""
	I1210 06:40:21.716035  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.716042  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:21.716047  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:21.716110  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:21.740524  836363 cri.go:89] found id: ""
	I1210 06:40:21.740538  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.740545  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:21.740551  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:21.740610  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:21.764686  836363 cri.go:89] found id: ""
	I1210 06:40:21.764699  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.764706  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:21.764711  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:21.764768  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:21.789476  836363 cri.go:89] found id: ""
	I1210 06:40:21.789490  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.789497  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:21.789502  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:21.789567  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:21.815957  836363 cri.go:89] found id: ""
	I1210 06:40:21.815973  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.815981  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:21.815986  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:21.816046  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:21.844568  836363 cri.go:89] found id: ""
	I1210 06:40:21.844582  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.844589  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:21.844597  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:21.844607  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:21.900940  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:21.900960  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:21.919059  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:21.919078  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:21.988088  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:21.979038   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.979792   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.981425   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.981947   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.983526   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:21.979038   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.979792   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.981425   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.981947   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.983526   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:21.988098  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:21.988109  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:22.051814  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:22.051834  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:24.585034  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:24.595723  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:24.595789  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:24.624873  836363 cri.go:89] found id: ""
	I1210 06:40:24.624888  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.624895  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:24.624900  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:24.624966  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:24.649543  836363 cri.go:89] found id: ""
	I1210 06:40:24.649557  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.649564  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:24.649570  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:24.649680  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:24.675056  836363 cri.go:89] found id: ""
	I1210 06:40:24.675080  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.675088  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:24.675093  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:24.675154  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:24.700453  836363 cri.go:89] found id: ""
	I1210 06:40:24.700466  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.700474  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:24.700479  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:24.700537  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:24.726867  836363 cri.go:89] found id: ""
	I1210 06:40:24.726881  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.726887  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:24.726893  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:24.726955  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:24.751980  836363 cri.go:89] found id: ""
	I1210 06:40:24.751994  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.752002  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:24.752007  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:24.752068  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:24.782328  836363 cri.go:89] found id: ""
	I1210 06:40:24.782342  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.782349  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:24.782357  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:24.782367  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:24.845411  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:24.845431  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:24.874554  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:24.874571  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:24.930797  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:24.930817  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:24.947891  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:24.947910  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:25.021562  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:25.011415   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.012701   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.013093   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.014987   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.015492   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:25.011415   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.012701   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.013093   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.014987   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.015492   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
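The probe timestamps (06:40:04, :06, :09, ...) show a fixed poll of roughly every three seconds, retrying until the apiserver appears or minikube's own wait deadline expires. A minimal sketch of the same loop, assuming a 3s interval and an arbitrary 20-attempt cap (the real deadline is set by minikube and is not visible in this excerpt):

    for i in $(seq 1 20); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 3
    done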
	I1210 06:40:27.522215  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:27.533345  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:27.533449  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:27.562516  836363 cri.go:89] found id: ""
	I1210 06:40:27.562529  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.562538  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:27.562543  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:27.562612  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:27.589053  836363 cri.go:89] found id: ""
	I1210 06:40:27.589081  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.589089  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:27.589098  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:27.589171  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:27.614058  836363 cri.go:89] found id: ""
	I1210 06:40:27.614072  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.614079  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:27.614084  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:27.614142  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:27.639274  836363 cri.go:89] found id: ""
	I1210 06:40:27.639288  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.639296  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:27.639310  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:27.639369  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:27.667535  836363 cri.go:89] found id: ""
	I1210 06:40:27.667549  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.667556  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:27.667561  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:27.667630  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:27.691075  836363 cri.go:89] found id: ""
	I1210 06:40:27.691090  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.691097  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:27.691102  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:27.691161  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:27.716129  836363 cri.go:89] found id: ""
	I1210 06:40:27.716142  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.716150  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:27.716157  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:27.716168  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:27.771440  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:27.771460  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:27.788230  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:27.788248  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:27.854509  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:27.846532   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.847111   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.848660   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.849114   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.850650   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:27.846532   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.847111   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.848660   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.849114   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.850650   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:27.854521  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:27.854533  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:27.922148  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:27.922172  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
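All seven crictl probes in the cycle above (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) come back empty. A self-contained Go sketch of that enumeration, reusing the exact `crictl ps -a --quiet --name=...` invocation from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// Mirrors the logged command: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Corresponds to the `found id: ""` / `0 containers` lines above.
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%q containers: %v\n", name, ids)
	}
}

--quiet makes crictl print only container IDs, so empty output means the runtime never created a container for that component, consistent with the refused connections on port 8441.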
	I1210 06:40:30.451005  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:30.461920  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:30.461982  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:30.489712  836363 cri.go:89] found id: ""
	I1210 06:40:30.489727  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.489734  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:30.489739  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:30.489800  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:30.513093  836363 cri.go:89] found id: ""
	I1210 06:40:30.513107  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.513114  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:30.513119  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:30.513196  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:30.539836  836363 cri.go:89] found id: ""
	I1210 06:40:30.539850  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.539857  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:30.539862  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:30.539921  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:30.563675  836363 cri.go:89] found id: ""
	I1210 06:40:30.563689  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.563696  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:30.563701  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:30.563768  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:30.587925  836363 cri.go:89] found id: ""
	I1210 06:40:30.587939  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.587946  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:30.587951  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:30.588014  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:30.612003  836363 cri.go:89] found id: ""
	I1210 06:40:30.612018  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.612025  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:30.612031  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:30.612094  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:30.640838  836363 cri.go:89] found id: ""
	I1210 06:40:30.640853  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.640860  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:30.640868  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:30.640879  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:30.696168  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:30.696189  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:30.712444  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:30.712461  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:30.779602  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:30.771019   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.771655   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.773324   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.773861   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.775400   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:30.771019   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.771655   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.773324   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.773861   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.775400   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:30.779612  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:30.779623  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:30.840751  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:30.840772  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
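When every probe comes back empty, the cycle falls through to log gathering: the kubelet and containerd units via journalctl, severity-filtered dmesg, `kubectl describe nodes` against the node-local kubeconfig, and a crictl (or docker) container listing. A sketch that runs the same five commands, copied verbatim from the log; it assumes it is executed on the minikube node itself, since the test drives them over SSH:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Command strings copied from the ssh_runner lines above.
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		fmt.Printf("== %s ==\n%s", s.name, out)
		if err != nil {
			// "describe nodes" exits 1 while the apiserver is down,
			// exactly as the warnings above show.
			fmt.Printf("(%s failed: %v)\n", s.name, err)
		}
	}
}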
	I1210 06:40:33.372644  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:33.382802  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:33.382862  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:33.407793  836363 cri.go:89] found id: ""
	I1210 06:40:33.407807  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.407815  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:33.407820  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:33.407877  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:33.430878  836363 cri.go:89] found id: ""
	I1210 06:40:33.430892  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.430899  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:33.430904  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:33.430960  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:33.454595  836363 cri.go:89] found id: ""
	I1210 06:40:33.454609  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.454616  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:33.454621  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:33.454678  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:33.479328  836363 cri.go:89] found id: ""
	I1210 06:40:33.479342  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.479349  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:33.479354  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:33.479416  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:33.503717  836363 cri.go:89] found id: ""
	I1210 06:40:33.503731  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.503744  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:33.503750  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:33.503811  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:33.527968  836363 cri.go:89] found id: ""
	I1210 06:40:33.527982  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.527989  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:33.527994  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:33.528076  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:33.552452  836363 cri.go:89] found id: ""
	I1210 06:40:33.552465  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.552472  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:33.552480  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:33.552490  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:33.586111  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:33.586127  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:33.644722  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:33.644742  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:33.663073  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:33.663090  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:33.731033  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:33.723109   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.723789   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.725296   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.725727   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.727228   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:33.723109   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.723789   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.725296   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.725727   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.727228   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:33.731044  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:33.731060  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:36.294593  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:36.306076  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:36.306134  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:36.334361  836363 cri.go:89] found id: ""
	I1210 06:40:36.334376  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.334383  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:36.334388  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:36.334447  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:36.361890  836363 cri.go:89] found id: ""
	I1210 06:40:36.361904  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.361911  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:36.361916  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:36.361977  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:36.387023  836363 cri.go:89] found id: ""
	I1210 06:40:36.387037  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.387044  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:36.387050  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:36.387109  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:36.411981  836363 cri.go:89] found id: ""
	I1210 06:40:36.411995  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.412011  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:36.412016  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:36.412085  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:36.436105  836363 cri.go:89] found id: ""
	I1210 06:40:36.436119  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.436136  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:36.436142  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:36.436215  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:36.463709  836363 cri.go:89] found id: ""
	I1210 06:40:36.463724  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.463731  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:36.463737  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:36.463795  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:36.492482  836363 cri.go:89] found id: ""
	I1210 06:40:36.492496  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.492503  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:36.492512  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:36.492522  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:36.551191  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:36.551210  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:36.568166  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:36.568183  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:36.635783  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:36.627478   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.627875   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.629429   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.629764   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.631231   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:36.627478   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.627875   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.629429   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.629764   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.631231   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:36.635793  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:36.635806  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:36.706158  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:36.706182  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:39.240421  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:39.250806  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:39.250867  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:39.275350  836363 cri.go:89] found id: ""
	I1210 06:40:39.275363  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.275370  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:39.275375  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:39.275431  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:39.309499  836363 cri.go:89] found id: ""
	I1210 06:40:39.309515  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.309522  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:39.309527  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:39.309605  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:39.335376  836363 cri.go:89] found id: ""
	I1210 06:40:39.335390  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.335397  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:39.335401  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:39.335460  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:39.364171  836363 cri.go:89] found id: ""
	I1210 06:40:39.364185  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.364192  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:39.364197  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:39.364261  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:39.390366  836363 cri.go:89] found id: ""
	I1210 06:40:39.390381  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.390388  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:39.390393  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:39.390456  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:39.418420  836363 cri.go:89] found id: ""
	I1210 06:40:39.418434  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.418441  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:39.418448  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:39.418525  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:39.443654  836363 cri.go:89] found id: ""
	I1210 06:40:39.443667  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.443674  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:39.443683  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:39.443693  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:39.508605  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:39.508627  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:39.541642  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:39.541657  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:39.598637  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:39.598658  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:39.614821  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:39.614837  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:39.681178  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:39.672580   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.673146   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.674858   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.675468   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.677195   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:39.672580   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.673146   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.674858   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.675468   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.677195   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
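Every `kubectl describe nodes` attempt fails identically with `dial tcp [::1]:8441: connect: connection refused`: the TCP connection is rejected before TLS or authentication even start, so nothing is listening on the apiserver port at all. A short check (a sketch, not part of the test suite) makes that distinction explicit:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the apiserver port the same way kubectl's dial does.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// Matches the errors above: connect: connection refused.
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}

A refused dial points at the apiserver container never starting (matching the empty crictl listings), whereas a timeout would instead suggest a firewall or routing problem.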
	I1210 06:40:42.181674  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:42.194020  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:42.194088  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:42.223014  836363 cri.go:89] found id: ""
	I1210 06:40:42.223033  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.223041  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:42.223053  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:42.223128  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:42.250171  836363 cri.go:89] found id: ""
	I1210 06:40:42.250186  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.250193  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:42.250199  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:42.250267  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:42.276322  836363 cri.go:89] found id: ""
	I1210 06:40:42.276343  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.276350  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:42.276356  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:42.276417  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:42.312287  836363 cri.go:89] found id: ""
	I1210 06:40:42.312302  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.312309  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:42.312314  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:42.312379  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:42.339930  836363 cri.go:89] found id: ""
	I1210 06:40:42.339944  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.339951  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:42.339956  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:42.340014  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:42.367830  836363 cri.go:89] found id: ""
	I1210 06:40:42.367844  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.367851  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:42.367857  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:42.367919  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:42.392070  836363 cri.go:89] found id: ""
	I1210 06:40:42.392084  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.392091  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:42.392099  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:42.392109  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:42.426049  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:42.426065  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:42.481003  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:42.481025  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:42.497786  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:42.497804  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:42.565103  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:42.556363   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.556746   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.558351   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.558980   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.560866   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:42.556363   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.556746   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.558351   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.558980   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.560866   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:42.565114  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:42.565124  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:45.129131  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:45.143244  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:45.143317  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:45.185169  836363 cri.go:89] found id: ""
	I1210 06:40:45.185203  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.185235  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:45.185259  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:45.185400  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:45.232743  836363 cri.go:89] found id: ""
	I1210 06:40:45.232760  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.232767  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:45.232774  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:45.232857  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:45.264531  836363 cri.go:89] found id: ""
	I1210 06:40:45.264564  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.264573  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:45.264585  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:45.264652  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:45.304876  836363 cri.go:89] found id: ""
	I1210 06:40:45.304891  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.304898  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:45.304912  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:45.304975  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:45.332686  836363 cri.go:89] found id: ""
	I1210 06:40:45.332700  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.332707  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:45.332713  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:45.332772  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:45.361418  836363 cri.go:89] found id: ""
	I1210 06:40:45.361443  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.361454  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:45.361460  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:45.361549  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:45.389935  836363 cri.go:89] found id: ""
	I1210 06:40:45.389949  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.389955  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:45.389963  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:45.389973  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:45.446063  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:45.446081  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:45.463171  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:45.463188  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:45.529007  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:45.520759   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.521319   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.522920   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.523417   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.524918   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:45.520759   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.521319   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.522920   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.523417   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.524918   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:45.529017  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:45.529027  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:45.596607  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:45.596629  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:48.127693  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:48.138167  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:48.138229  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:48.163699  836363 cri.go:89] found id: ""
	I1210 06:40:48.163713  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.163720  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:48.163726  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:48.163788  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:48.187478  836363 cri.go:89] found id: ""
	I1210 06:40:48.187491  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.187498  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:48.187503  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:48.187571  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:48.210551  836363 cri.go:89] found id: ""
	I1210 06:40:48.210565  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.210572  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:48.210577  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:48.210635  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:48.234710  836363 cri.go:89] found id: ""
	I1210 06:40:48.234723  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.234730  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:48.234735  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:48.234792  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:48.257754  836363 cri.go:89] found id: ""
	I1210 06:40:48.257767  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.257774  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:48.257779  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:48.257837  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:48.281482  836363 cri.go:89] found id: ""
	I1210 06:40:48.281497  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.281503  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:48.281508  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:48.281571  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:48.321472  836363 cri.go:89] found id: ""
	I1210 06:40:48.321486  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.321493  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:48.321501  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:48.321519  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:48.353157  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:48.353176  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:48.414214  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:48.414234  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:48.431305  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:48.431324  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:48.504839  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:48.496885   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.497412   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.499192   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.499575   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.501075   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:48.496885   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.497412   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.499192   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.499575   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.501075   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:48.504849  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:48.504860  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
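The timestamps show the whole cycle repeating roughly every three seconds (06:40:27, :30, :33, ... :51) until the test's overall wait deadline expires. A minimal sketch of such a poll-until-healthy loop, assuming a generic /healthz probe and an illustrative five-minute deadline (neither value is taken from minikube):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver's certificate is not trusted by the host, so skip
		// verification; acceptable for a liveness probe, not for real traffic.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(5 * time.Minute) // illustrative, not minikube's value
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://localhost:8441/healthz")
		if err == nil {
			resp.Body.Close()
			fmt.Println("apiserver answered with status", resp.StatusCode)
			return
		}
		fmt.Println("not ready:", err)
		time.Sleep(3 * time.Second) // matches the cycle spacing in the log
	}
	fmt.Println("gave up: apiserver never came up before the deadline")
}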
	I1210 06:40:51.069620  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:51.080075  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:51.080142  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:51.110642  836363 cri.go:89] found id: ""
	I1210 06:40:51.110656  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.110663  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:51.110668  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:51.110735  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:51.135875  836363 cri.go:89] found id: ""
	I1210 06:40:51.135889  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.135897  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:51.135902  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:51.135969  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:51.160992  836363 cri.go:89] found id: ""
	I1210 06:40:51.161007  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.161014  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:51.161019  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:51.161079  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:51.190942  836363 cri.go:89] found id: ""
	I1210 06:40:51.190957  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.190964  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:51.190969  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:51.191028  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:51.214853  836363 cri.go:89] found id: ""
	I1210 06:40:51.214866  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.214873  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:51.214878  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:51.214934  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:51.238972  836363 cri.go:89] found id: ""
	I1210 06:40:51.238986  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.238993  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:51.238998  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:51.239056  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:51.263101  836363 cri.go:89] found id: ""
	I1210 06:40:51.263115  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.263122  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:51.263130  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:51.263147  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:51.334552  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:51.325962   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.326878   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.328565   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.328869   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.330403   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:51.325962   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.326878   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.328565   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.328869   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.330403   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
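	The repeated "connection refused" on localhost:8441 above means no kube-apiserver was listening on the test's apiserver port when the gather loop ran, so every kubectl call from inside the node fails before reaching any API. A minimal sketch of probing this by hand from a shell inside the node (assuming crictl and curl are available there; the first two commands are exactly what the log runs, while /livez is the standard apiserver health endpoint and is an assumption, not something shown in this log):

	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'       # process-level check, as the log runs it
	    sudo crictl ps -a --quiet --name=kube-apiserver    # container-level check, as the log runs it
	    curl -sk https://localhost:8441/livez || echo "apiserver not listening on 8441"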
	I1210 06:40:51.334562  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:51.334574  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:51.405170  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:51.405189  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:51.433244  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:51.433261  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:51.491472  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:51.491494  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
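	Each retry cycle in this log is the same fixed battery: one pgrep for the apiserver process, one crictl listing per expected component, then the journald/dmesg gathers plus the describe-nodes attempt. A sketch of one pass, assembled only from commands visible in this log (run inside the node; paths and flags as logged, nothing invented):

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "No container was found matching \"$name\""
	    done
	    sudo journalctl -u containerd -n 400
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

	With no component containers ever appearing, every pass below ends the same way, and the loop simply waits a few seconds and retries.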
	I1210 06:40:54.008401  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:54.019572  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:54.019640  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:54.049412  836363 cri.go:89] found id: ""
	I1210 06:40:54.049427  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.049434  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:54.049439  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:54.049505  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:54.074298  836363 cri.go:89] found id: ""
	I1210 06:40:54.074313  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.074319  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:54.074324  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:54.074384  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:54.102940  836363 cri.go:89] found id: ""
	I1210 06:40:54.102954  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.102961  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:54.102966  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:54.103030  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:54.127504  836363 cri.go:89] found id: ""
	I1210 06:40:54.127543  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.127556  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:54.127561  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:54.127619  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:54.156807  836363 cri.go:89] found id: ""
	I1210 06:40:54.156822  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.156829  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:54.156833  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:54.156896  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:54.181320  836363 cri.go:89] found id: ""
	I1210 06:40:54.181335  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.181342  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:54.181348  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:54.181406  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:54.205593  836363 cri.go:89] found id: ""
	I1210 06:40:54.205605  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.205612  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:54.205620  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:54.205631  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:54.222285  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:54.222301  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:54.288392  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:54.279932   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.280608   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.282205   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.282786   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.284468   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:54.279932   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.280608   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.282205   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.282786   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.284468   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:54.288402  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:54.288423  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:54.357504  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:54.357523  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:54.391376  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:54.391394  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:56.947968  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:56.957769  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:56.957833  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:56.981684  836363 cri.go:89] found id: ""
	I1210 06:40:56.981698  836363 logs.go:282] 0 containers: []
	W1210 06:40:56.981704  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:56.981709  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:56.981773  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:57.008321  836363 cri.go:89] found id: ""
	I1210 06:40:57.008336  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.008344  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:57.008348  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:57.008409  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:57.033150  836363 cri.go:89] found id: ""
	I1210 06:40:57.033164  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.033171  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:57.033175  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:57.033234  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:57.061083  836363 cri.go:89] found id: ""
	I1210 06:40:57.061096  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.061103  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:57.061108  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:57.061167  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:57.084352  836363 cri.go:89] found id: ""
	I1210 06:40:57.084366  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.084372  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:57.084377  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:57.084432  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:57.108194  836363 cri.go:89] found id: ""
	I1210 06:40:57.108225  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.108239  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:57.108244  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:57.108315  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:57.136912  836363 cri.go:89] found id: ""
	I1210 06:40:57.136926  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.136935  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:57.136942  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:57.136953  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:57.198446  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:57.198510  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:57.225389  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:57.225406  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:57.283570  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:57.283589  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:57.301703  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:57.301727  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:57.380612  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:57.372663   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.373165   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.374676   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.375061   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.376625   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:57.372663   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.373165   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.374676   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.375061   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.376625   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:59.880952  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:59.891486  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:59.891569  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:59.915927  836363 cri.go:89] found id: ""
	I1210 06:40:59.915941  836363 logs.go:282] 0 containers: []
	W1210 06:40:59.915947  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:59.915953  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:59.916013  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:59.944178  836363 cri.go:89] found id: ""
	I1210 06:40:59.944192  836363 logs.go:282] 0 containers: []
	W1210 06:40:59.944200  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:59.944205  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:59.944264  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:59.969112  836363 cri.go:89] found id: ""
	I1210 06:40:59.969126  836363 logs.go:282] 0 containers: []
	W1210 06:40:59.969133  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:59.969138  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:59.969201  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:59.994908  836363 cri.go:89] found id: ""
	I1210 06:40:59.994922  836363 logs.go:282] 0 containers: []
	W1210 06:40:59.994929  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:59.994934  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:59.994991  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:00.092005  836363 cri.go:89] found id: ""
	I1210 06:41:00.092022  836363 logs.go:282] 0 containers: []
	W1210 06:41:00.092030  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:00.092036  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:00.092110  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:00.176527  836363 cri.go:89] found id: ""
	I1210 06:41:00.176549  836363 logs.go:282] 0 containers: []
	W1210 06:41:00.176557  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:00.176563  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:00.176628  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:00.227381  836363 cri.go:89] found id: ""
	I1210 06:41:00.227398  836363 logs.go:282] 0 containers: []
	W1210 06:41:00.227406  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:00.227414  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:00.227427  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:00.330232  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:00.330255  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:00.363949  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:00.363967  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:00.445659  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:00.436629   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.437562   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.439318   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.439706   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.441418   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:00.436629   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.437562   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.439318   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.439706   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.441418   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:00.445669  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:00.445681  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:00.509415  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:00.509440  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:03.043380  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:03.053715  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:03.053796  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:03.079434  836363 cri.go:89] found id: ""
	I1210 06:41:03.079449  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.079456  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:03.079462  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:03.079520  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:03.112748  836363 cri.go:89] found id: ""
	I1210 06:41:03.112761  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.112768  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:03.112773  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:03.112831  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:03.137303  836363 cri.go:89] found id: ""
	I1210 06:41:03.137317  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.137324  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:03.137329  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:03.137390  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:03.162303  836363 cri.go:89] found id: ""
	I1210 06:41:03.162317  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.162324  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:03.162329  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:03.162387  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:03.186423  836363 cri.go:89] found id: ""
	I1210 06:41:03.186438  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.186445  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:03.186449  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:03.186542  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:03.215070  836363 cri.go:89] found id: ""
	I1210 06:41:03.215084  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.215091  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:03.215096  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:03.215154  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:03.238820  836363 cri.go:89] found id: ""
	I1210 06:41:03.238834  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.238841  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:03.238850  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:03.238861  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:03.293835  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:03.293853  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:03.312548  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:03.312565  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:03.381504  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:03.373169   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.373896   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.375591   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.376023   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.377455   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:03.373169   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.373896   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.375591   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.376023   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.377455   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:03.381514  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:03.381524  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:03.444806  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:03.444826  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:05.972428  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:05.982168  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:05.982226  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:06.011191  836363 cri.go:89] found id: ""
	I1210 06:41:06.011206  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.011214  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:06.011220  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:06.011295  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:06.038921  836363 cri.go:89] found id: ""
	I1210 06:41:06.038937  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.038944  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:06.038949  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:06.039011  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:06.063412  836363 cri.go:89] found id: ""
	I1210 06:41:06.063426  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.063433  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:06.063438  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:06.063497  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:06.087777  836363 cri.go:89] found id: ""
	I1210 06:41:06.087800  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.087807  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:06.087812  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:06.087881  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:06.112794  836363 cri.go:89] found id: ""
	I1210 06:41:06.112809  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.112815  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:06.112821  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:06.112877  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:06.137620  836363 cri.go:89] found id: ""
	I1210 06:41:06.137634  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.137641  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:06.137645  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:06.137702  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:06.164245  836363 cri.go:89] found id: ""
	I1210 06:41:06.164259  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.164266  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:06.164274  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:06.164331  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:06.219975  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:06.219994  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:06.236571  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:06.236596  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:06.309920  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:06.294848   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.295676   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.297523   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.298004   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.299656   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:06.294848   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.295676   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.297523   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.298004   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.299656   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:06.309934  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:06.309944  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:06.383624  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:06.383646  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:08.911581  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:08.923631  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:08.923713  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:08.950073  836363 cri.go:89] found id: ""
	I1210 06:41:08.950087  836363 logs.go:282] 0 containers: []
	W1210 06:41:08.950094  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:08.950100  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:08.950157  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:08.976323  836363 cri.go:89] found id: ""
	I1210 06:41:08.976337  836363 logs.go:282] 0 containers: []
	W1210 06:41:08.976345  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:08.976349  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:08.976409  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:09.001975  836363 cri.go:89] found id: ""
	I1210 06:41:09.001991  836363 logs.go:282] 0 containers: []
	W1210 06:41:09.001998  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:09.002004  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:09.002076  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:09.027223  836363 cri.go:89] found id: ""
	I1210 06:41:09.027237  836363 logs.go:282] 0 containers: []
	W1210 06:41:09.027250  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:09.027256  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:09.027314  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:09.051870  836363 cri.go:89] found id: ""
	I1210 06:41:09.051884  836363 logs.go:282] 0 containers: []
	W1210 06:41:09.051890  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:09.051896  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:09.051955  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:09.075643  836363 cri.go:89] found id: ""
	I1210 06:41:09.075658  836363 logs.go:282] 0 containers: []
	W1210 06:41:09.075678  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:09.075684  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:09.075740  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:09.100390  836363 cri.go:89] found id: ""
	I1210 06:41:09.100404  836363 logs.go:282] 0 containers: []
	W1210 06:41:09.100411  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:09.100419  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:09.100430  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:09.164481  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:09.156151   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.156953   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.158536   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.159001   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.160652   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:09.156151   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.156953   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.158536   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.159001   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.160652   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:09.164492  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:09.164502  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:09.228784  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:09.228804  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:09.256846  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:09.256863  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:09.312682  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:09.312702  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:11.842135  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:11.852673  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:11.852735  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:11.877129  836363 cri.go:89] found id: ""
	I1210 06:41:11.877144  836363 logs.go:282] 0 containers: []
	W1210 06:41:11.877151  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:11.877156  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:11.877215  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:11.902763  836363 cri.go:89] found id: ""
	I1210 06:41:11.902777  836363 logs.go:282] 0 containers: []
	W1210 06:41:11.902784  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:11.902789  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:11.902863  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:11.927125  836363 cri.go:89] found id: ""
	I1210 06:41:11.927139  836363 logs.go:282] 0 containers: []
	W1210 06:41:11.927146  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:11.927150  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:11.927206  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:11.966123  836363 cri.go:89] found id: ""
	I1210 06:41:11.966137  836363 logs.go:282] 0 containers: []
	W1210 06:41:11.966144  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:11.966149  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:11.966205  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:11.990046  836363 cri.go:89] found id: ""
	I1210 06:41:11.990059  836363 logs.go:282] 0 containers: []
	W1210 06:41:11.990067  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:11.990072  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:11.990132  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:12.015096  836363 cri.go:89] found id: ""
	I1210 06:41:12.015111  836363 logs.go:282] 0 containers: []
	W1210 06:41:12.015118  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:12.015124  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:12.015185  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:12.040883  836363 cri.go:89] found id: ""
	I1210 06:41:12.040897  836363 logs.go:282] 0 containers: []
	W1210 06:41:12.040905  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:12.040912  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:12.040923  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:12.067975  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:12.067991  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:12.124161  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:12.124181  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:12.141074  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:12.141090  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:12.204309  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:12.196503   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.197043   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.198523   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.199003   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.200458   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:12.196503   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.197043   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.198523   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.199003   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.200458   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:12.204325  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:12.204336  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:14.770164  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:14.781008  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:14.781070  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:14.810029  836363 cri.go:89] found id: ""
	I1210 06:41:14.810042  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.810051  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:14.810056  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:14.810115  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:14.834988  836363 cri.go:89] found id: ""
	I1210 06:41:14.835002  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.835009  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:14.835015  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:14.835076  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:14.859273  836363 cri.go:89] found id: ""
	I1210 06:41:14.859287  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.859294  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:14.859299  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:14.859358  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:14.884024  836363 cri.go:89] found id: ""
	I1210 06:41:14.884038  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.884045  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:14.884051  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:14.884111  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:14.907573  836363 cri.go:89] found id: ""
	I1210 06:41:14.907587  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.907596  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:14.907601  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:14.907660  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:14.932198  836363 cri.go:89] found id: ""
	I1210 06:41:14.932212  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.932219  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:14.932225  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:14.932285  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:14.957047  836363 cri.go:89] found id: ""
	I1210 06:41:14.957062  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.957069  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:14.957077  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:14.957087  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:15.015819  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:15.015841  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:15.035356  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:15.035387  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:15.111422  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:15.102537   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.103449   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.105131   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.105642   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.107373   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:15.102537   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.103449   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.105131   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.105642   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.107373   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
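Every `describe nodes` attempt fails the same way: the kubeconfig points kubectl at localhost:8441, no kube-apiserver is listening there, so each API-group discovery request is refused at the TCP level before any HTTP exchange happens. A minimal Go sketch reproducing just that check, assuming only that the port is closed (the address is taken from the log):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// With no kube-apiserver bound to :8441, the dial itself fails,
	// mirroring the "dial tcp [::1]:8441: connect: connection refused" lines.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8441")
}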
	I1210 06:41:15.111434  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:15.111446  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:15.173911  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:15.173930  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
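Taken together, one full cycle is: wait a few seconds, `pgrep` for a kube-apiserver process, enumerate CRI containers per component, then gather kubelet, dmesg, describe-nodes, containerd, and container-status logs. A sketch of that outer wait loop, assuming a fixed ~3s interval and an illustrative timeout (the real backoff policy and deadline are not visible in the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the "pgrep -xnf kube-apiserver.*minikube.*" probe
// from the log: a zero exit status means a matching process exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // illustrative, not minikube's actual timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		// In the real log a full diagnostics pass (crictl listings, journalctl,
		// kubectl describe nodes) runs here before the next attempt ~3s later.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}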
	[... the probe-and-gather cycle above repeats at 06:41:17, 06:41:20, 06:41:23, 06:41:26, 06:41:29, 06:41:32, 06:41:35, and 06:41:38, differing only in timestamps, kubectl PIDs, and the order in which the log sources are gathered; every cycle finds 0 control-plane containers and ends with the same "connection refused" stderr from localhost:8441 ...]
	I1210 06:41:41.199012  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:41.208683  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:41.208748  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:41.232632  836363 cri.go:89] found id: ""
	I1210 06:41:41.232645  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.232652  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:41.232657  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:41.232718  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:41.255309  836363 cri.go:89] found id: ""
	I1210 06:41:41.255322  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.255329  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:41.255334  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:41.255388  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:41.279539  836363 cri.go:89] found id: ""
	I1210 06:41:41.279553  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.279560  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:41.279565  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:41.279636  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:41.306855  836363 cri.go:89] found id: ""
	I1210 06:41:41.306870  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.306877  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:41.306882  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:41.306943  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:41.331914  836363 cri.go:89] found id: ""
	I1210 06:41:41.331927  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.331933  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:41.331938  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:41.331998  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:41.355926  836363 cri.go:89] found id: ""
	I1210 06:41:41.355940  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.355947  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:41.355952  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:41.356022  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:41.380191  836363 cri.go:89] found id: ""
	I1210 06:41:41.380205  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.380213  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:41.380221  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:41.380237  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:41.396613  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:41.396631  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:41.460969  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:41.452836   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.453418   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.455027   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.455521   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.457097   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:41.452836   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.453418   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.455027   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.455521   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.457097   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:41.460979  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:41.460991  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:41.522046  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:41.522066  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:41.556015  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:41.556032  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
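The wait loop above repeats a fixed pattern: pgrep for a running kube-apiserver process, then one "crictl ps -a --quiet --name=<component>" lookup per expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and, when every lookup returns an empty ID list, a full diagnostics sweep. A minimal sketch for replaying the enumeration by hand, assuming a placeholder profile name; the crictl flags are copied from the commands logged above:

    # Hypothetical manual re-check; <profile> is a placeholder, not taken from the log.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        printf '%-24s' "$c"
        # prints the number of container IDs found for each component (0 here)
        minikube ssh -p <profile> -- "sudo crictl ps -a --quiet --name=$c" | grep -c .
    done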
	I1210 06:41:44.133635  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:44.143661  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:44.143725  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:44.170247  836363 cri.go:89] found id: ""
	I1210 06:41:44.170262  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.170269  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:44.170274  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:44.170341  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:44.195020  836363 cri.go:89] found id: ""
	I1210 06:41:44.195034  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.195040  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:44.195045  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:44.195101  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:44.219352  836363 cri.go:89] found id: ""
	I1210 06:41:44.219366  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.219373  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:44.219378  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:44.219435  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:44.247508  836363 cri.go:89] found id: ""
	I1210 06:41:44.247522  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.247529  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:44.247534  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:44.247593  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:44.271983  836363 cri.go:89] found id: ""
	I1210 06:41:44.271997  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.272004  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:44.272009  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:44.272066  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:44.295908  836363 cri.go:89] found id: ""
	I1210 06:41:44.295922  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.295928  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:44.295934  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:44.295993  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:44.324246  836363 cri.go:89] found id: ""
	I1210 06:41:44.324260  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.324266  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:44.324275  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:44.324285  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:44.387028  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:44.387048  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:44.415316  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:44.415332  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:44.471125  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:44.471146  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:44.487999  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:44.488017  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:44.555772  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:44.542191   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.542827   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.545275   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.545612   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.547112   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:44.542191   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.542827   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.545275   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.545612   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.547112   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
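Of the five gathers, only "describe nodes" fails, and it fails with connection refused on localhost:8441 while the journalctl and dmesg gathers over the same SSH channel succeed; that points at the apiserver never binding its port inside the node rather than at SSH or the runtime. A hedged diagnostic sketch under the same placeholder-profile assumption; the ss probe and the /livez health endpoint are standard tooling, not part of this test:

    # Is anything listening on the apiserver port, and does it answer health checks?
    minikube ssh -p <profile> -- 'sudo ss -tlnp | grep 8441 || echo "nothing listening on :8441"'
    minikube ssh -p <profile> -- 'curl -sk https://localhost:8441/livez; echo'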
	I1210 06:41:47.056814  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:47.066882  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:47.066943  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:47.091827  836363 cri.go:89] found id: ""
	I1210 06:41:47.091841  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.091848  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:47.091853  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:47.091910  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:47.115556  836363 cri.go:89] found id: ""
	I1210 06:41:47.115571  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.115578  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:47.115583  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:47.115640  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:47.140381  836363 cri.go:89] found id: ""
	I1210 06:41:47.140395  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.140402  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:47.140407  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:47.140466  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:47.164584  836363 cri.go:89] found id: ""
	I1210 06:41:47.164599  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.164606  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:47.164611  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:47.164669  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:47.188952  836363 cri.go:89] found id: ""
	I1210 06:41:47.188966  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.188973  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:47.188978  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:47.189036  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:47.215501  836363 cri.go:89] found id: ""
	I1210 06:41:47.215515  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.215522  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:47.215528  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:47.215594  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:47.248270  836363 cri.go:89] found id: ""
	I1210 06:41:47.248284  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.248291  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:47.248301  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:47.248312  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:47.264763  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:47.264780  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:47.328736  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:47.319556   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.320385   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.321992   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.322636   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.324430   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:47.319556   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.320385   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.321992   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.322636   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.324430   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:47.328762  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:47.328773  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:47.391108  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:47.391129  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:47.421573  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:47.421590  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
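The "container status" gather above is written defensively: it resolves crictl via which, falls back to the bare name if which finds nothing, and falls back to docker if the crictl listing fails outright, so the same command works on CRI and Docker runtimes alike. An annotated restatement of the logged command, with $(...) substituted for the backticks but the behavior unchanged:

    # 1) resolve crictl via which, or use the bare name and let PATH decide;
    # 2) if the crictl listing fails entirely, fall back to docker.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a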
	I1210 06:41:49.978044  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:49.988396  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:49.988461  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:50.019406  836363 cri.go:89] found id: ""
	I1210 06:41:50.019422  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.019430  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:50.019436  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:50.019525  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:50.046394  836363 cri.go:89] found id: ""
	I1210 06:41:50.046409  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.046416  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:50.046421  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:50.046513  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:50.073199  836363 cri.go:89] found id: ""
	I1210 06:41:50.073213  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.073220  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:50.073225  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:50.073287  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:50.099702  836363 cri.go:89] found id: ""
	I1210 06:41:50.099716  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.099722  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:50.099728  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:50.099787  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:50.128872  836363 cri.go:89] found id: ""
	I1210 06:41:50.128886  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.128893  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:50.128898  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:50.128956  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:50.153319  836363 cri.go:89] found id: ""
	I1210 06:41:50.153333  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.153340  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:50.153346  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:50.153404  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:50.180949  836363 cri.go:89] found id: ""
	I1210 06:41:50.180962  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.180968  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:50.180976  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:50.180986  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:50.242900  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:50.242922  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:50.273618  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:50.273634  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:50.328466  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:50.328485  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:50.344888  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:50.344905  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:50.410799  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:50.402194   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.403126   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.404850   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.405142   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.406780   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:50.402194   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.403126   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.404850   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.405142   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.406780   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
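The timestamps show the loop retrying roughly every three seconds (06:41:38, :41, :44, :47, :50, :53, ...), with every pass producing the same empty results. When triaging a saved copy of this output, counting those misses gives a quick measure of how long the apiserver stayed down; the log filename below is an assumption:

    # Each match is one wait-loop iteration that found no apiserver container.
    grep -c 'No container was found matching "kube-apiserver"' start-with-proxy.log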
	I1210 06:41:52.911683  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:52.922118  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:52.922186  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:52.947907  836363 cri.go:89] found id: ""
	I1210 06:41:52.947922  836363 logs.go:282] 0 containers: []
	W1210 06:41:52.947930  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:52.947935  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:52.948002  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:52.974796  836363 cri.go:89] found id: ""
	I1210 06:41:52.974812  836363 logs.go:282] 0 containers: []
	W1210 06:41:52.974820  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:52.974826  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:52.974885  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:53.005919  836363 cri.go:89] found id: ""
	I1210 06:41:53.005935  836363 logs.go:282] 0 containers: []
	W1210 06:41:53.005942  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:53.005950  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:53.006027  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:53.033320  836363 cri.go:89] found id: ""
	I1210 06:41:53.033333  836363 logs.go:282] 0 containers: []
	W1210 06:41:53.033340  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:53.033345  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:53.033405  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:53.061819  836363 cri.go:89] found id: ""
	I1210 06:41:53.061834  836363 logs.go:282] 0 containers: []
	W1210 06:41:53.061851  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:53.061857  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:53.061924  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:53.086290  836363 cri.go:89] found id: ""
	I1210 06:41:53.086304  836363 logs.go:282] 0 containers: []
	W1210 06:41:53.086311  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:53.086316  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:53.086374  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:53.111667  836363 cri.go:89] found id: ""
	I1210 06:41:53.111681  836363 logs.go:282] 0 containers: []
	W1210 06:41:53.111697  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:53.111706  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:53.111716  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:53.168392  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:53.168412  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:53.185807  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:53.185823  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:53.254387  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:53.246258   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.246805   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.248540   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.248996   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.250535   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:53.246258   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.246805   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.248540   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.248996   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.250535   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:53.254397  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:53.254408  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:53.319043  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:53.319063  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:55.851295  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:55.861334  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:55.861402  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:55.886929  836363 cri.go:89] found id: ""
	I1210 06:41:55.886949  836363 logs.go:282] 0 containers: []
	W1210 06:41:55.886957  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:55.886962  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:55.887020  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:55.915116  836363 cri.go:89] found id: ""
	I1210 06:41:55.915130  836363 logs.go:282] 0 containers: []
	W1210 06:41:55.915138  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:55.915142  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:55.915200  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:55.939013  836363 cri.go:89] found id: ""
	I1210 06:41:55.939033  836363 logs.go:282] 0 containers: []
	W1210 06:41:55.939040  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:55.939045  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:55.939101  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:55.964369  836363 cri.go:89] found id: ""
	I1210 06:41:55.964383  836363 logs.go:282] 0 containers: []
	W1210 06:41:55.964390  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:55.964395  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:55.964455  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:55.989465  836363 cri.go:89] found id: ""
	I1210 06:41:55.989478  836363 logs.go:282] 0 containers: []
	W1210 06:41:55.989485  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:55.989491  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:55.989557  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:56.014203  836363 cri.go:89] found id: ""
	I1210 06:41:56.014218  836363 logs.go:282] 0 containers: []
	W1210 06:41:56.014225  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:56.014230  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:56.014336  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:56.043892  836363 cri.go:89] found id: ""
	I1210 06:41:56.043906  836363 logs.go:282] 0 containers: []
	W1210 06:41:56.043916  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:56.043925  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:56.043936  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:56.112761  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:56.104681   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.105226   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.106816   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.107362   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.108899   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:56.104681   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.105226   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.106816   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.107362   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.108899   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:56.112770  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:56.112781  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:56.174642  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:56.174662  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:56.202947  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:56.202963  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:56.259062  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:56.259082  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:58.776033  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:58.786675  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:58.786737  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:58.822543  836363 cri.go:89] found id: ""
	I1210 06:41:58.822557  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.822563  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:58.822572  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:58.822634  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:58.848835  836363 cri.go:89] found id: ""
	I1210 06:41:58.848850  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.848857  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:58.848862  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:58.848919  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:58.876530  836363 cri.go:89] found id: ""
	I1210 06:41:58.876544  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.876551  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:58.876556  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:58.876615  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:58.901700  836363 cri.go:89] found id: ""
	I1210 06:41:58.901714  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.901728  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:58.901733  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:58.901791  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:58.928495  836363 cri.go:89] found id: ""
	I1210 06:41:58.928509  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.928515  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:58.928520  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:58.928577  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:58.952415  836363 cri.go:89] found id: ""
	I1210 06:41:58.952428  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.952435  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:58.952440  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:58.952496  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:58.981756  836363 cri.go:89] found id: ""
	I1210 06:41:58.981771  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.981788  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:58.981797  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:58.981809  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:59.049361  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:59.041702   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.042693   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.043691   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.044312   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.045540   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:59.041702   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.042693   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.043691   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.044312   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.045540   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:59.049372  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:59.049382  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:59.111079  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:59.111098  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:59.141459  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:59.141474  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:59.199670  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:59.199691  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:01.716854  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:01.728404  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:01.728475  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:01.756029  836363 cri.go:89] found id: ""
	I1210 06:42:01.756042  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.756049  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:42:01.756054  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:01.756109  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:01.780969  836363 cri.go:89] found id: ""
	I1210 06:42:01.780983  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.780990  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:42:01.780995  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:01.781055  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:01.820198  836363 cri.go:89] found id: ""
	I1210 06:42:01.820212  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.820219  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:42:01.820224  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:01.820284  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:01.848531  836363 cri.go:89] found id: ""
	I1210 06:42:01.848546  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.848553  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:42:01.848558  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:01.848617  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:01.878420  836363 cri.go:89] found id: ""
	I1210 06:42:01.878433  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.878441  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:01.878448  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:01.878534  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:01.905311  836363 cri.go:89] found id: ""
	I1210 06:42:01.905325  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.905344  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:42:01.905350  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:01.905421  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:01.929912  836363 cri.go:89] found id: ""
	I1210 06:42:01.929926  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.929944  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:01.929953  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:01.929963  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:01.985928  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:01.985948  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:02.003638  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:02.003657  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:02.075789  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:42:02.068334   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.068871   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.070020   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.070533   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.071984   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:42:02.068334   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.068871   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.070020   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.070533   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.071984   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:42:02.075800  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:02.075810  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:02.136779  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:42:02.136798  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:04.664122  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:04.675095  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:04.675159  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:04.699777  836363 cri.go:89] found id: ""
	I1210 06:42:04.699800  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.699808  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:42:04.699814  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:04.699911  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:04.724439  836363 cri.go:89] found id: ""
	I1210 06:42:04.724461  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.724468  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:42:04.724473  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:04.724538  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:04.750165  836363 cri.go:89] found id: ""
	I1210 06:42:04.750179  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.750187  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:42:04.750192  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:04.750260  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:04.775655  836363 cri.go:89] found id: ""
	I1210 06:42:04.775669  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.775676  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:42:04.775681  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:04.775740  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:04.805746  836363 cri.go:89] found id: ""
	I1210 06:42:04.805759  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.805776  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:04.805782  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:04.805849  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:04.836239  836363 cri.go:89] found id: ""
	I1210 06:42:04.836261  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.836269  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:42:04.836275  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:04.836344  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:04.862854  836363 cri.go:89] found id: ""
	I1210 06:42:04.862868  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.862875  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:04.862883  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:04.862893  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:04.922415  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:04.922435  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:04.939187  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:04.939203  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:05.006750  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:42:04.996413   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.996887   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.998439   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.998946   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:05.000828   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:42:04.996413   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.996887   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.998439   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.998946   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:05.000828   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:42:05.006762  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:05.006773  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:05.070511  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:42:05.070533  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:07.606355  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:07.617096  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:07.617156  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:07.642031  836363 cri.go:89] found id: ""
	I1210 06:42:07.642047  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.642054  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:42:07.642060  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:07.642117  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:07.670075  836363 cri.go:89] found id: ""
	I1210 06:42:07.670089  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.670107  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:42:07.670114  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:07.670174  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:07.695503  836363 cri.go:89] found id: ""
	I1210 06:42:07.695517  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.695534  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:42:07.695539  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:07.695613  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:07.719792  836363 cri.go:89] found id: ""
	I1210 06:42:07.719805  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.719813  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:42:07.719818  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:07.719875  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:07.742885  836363 cri.go:89] found id: ""
	I1210 06:42:07.742899  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.742906  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:07.742911  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:07.742972  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:07.766658  836363 cri.go:89] found id: ""
	I1210 06:42:07.766672  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.766679  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:42:07.766684  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:07.766742  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:07.790890  836363 cri.go:89] found id: ""
	I1210 06:42:07.790917  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.790924  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:07.790932  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:42:07.790943  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:07.832030  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:07.832053  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:07.897794  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:07.897815  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:07.914747  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:07.914765  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:07.985400  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:42:07.977663   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.978174   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.979730   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.980136   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.981623   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:42:07.977663   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.978174   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.979730   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.980136   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.981623   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:42:07.985411  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:07.985422  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:10.549627  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:10.559818  836363 kubeadm.go:602] duration metric: took 4m3.540459063s to restartPrimaryControlPlane
	W1210 06:42:10.559885  836363 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 06:42:10.559961  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 06:42:10.971123  836363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:42:10.985022  836363 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:42:10.992941  836363 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:42:10.992994  836363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:42:11.001748  836363 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:42:11.001760  836363 kubeadm.go:158] found existing configuration files:
	
	I1210 06:42:11.001824  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:42:11.011668  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:42:11.011736  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:42:11.019850  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:42:11.027722  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:42:11.027783  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:42:11.035605  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:42:11.043216  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:42:11.043273  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:42:11.050854  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:42:11.058765  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:42:11.058844  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:42:11.066934  836363 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:42:11.105523  836363 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 06:42:11.105575  836363 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:42:11.188151  836363 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:42:11.188218  836363 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:42:11.188255  836363 kubeadm.go:319] OS: Linux
	I1210 06:42:11.188304  836363 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:42:11.188354  836363 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:42:11.188398  836363 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:42:11.188448  836363 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:42:11.188493  836363 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:42:11.188543  836363 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:42:11.188590  836363 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:42:11.188634  836363 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:42:11.188683  836363 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:42:11.250124  836363 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:42:11.250230  836363 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:42:11.250322  836363 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:42:11.255308  836363 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:42:11.258775  836363 out.go:252]   - Generating certificates and keys ...
	I1210 06:42:11.258873  836363 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:42:11.258950  836363 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:42:11.259045  836363 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:42:11.259113  836363 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:42:11.259184  836363 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:42:11.259237  836363 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:42:11.259299  836363 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:42:11.259360  836363 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:42:11.259435  836363 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:42:11.259512  836363 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:42:11.259731  836363 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:42:11.259789  836363 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:42:12.423232  836363 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:42:12.577934  836363 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:42:12.783953  836363 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:42:13.093269  836363 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:42:13.330460  836363 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:42:13.331164  836363 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:42:13.333749  836363 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:42:13.336840  836363 out.go:252]   - Booting up control plane ...
	I1210 06:42:13.336937  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:42:13.337013  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:42:13.337083  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:42:13.358981  836363 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:42:13.359103  836363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:42:13.368350  836363 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:42:13.369623  836363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:42:13.370235  836363 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:42:13.505873  836363 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:42:13.506077  836363 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:46:13.506731  836363 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00070392s
	I1210 06:46:13.506763  836363 kubeadm.go:319] 
	I1210 06:46:13.506850  836363 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:46:13.506894  836363 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:46:13.506999  836363 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:46:13.507005  836363 kubeadm.go:319] 
	I1210 06:46:13.507125  836363 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:46:13.507158  836363 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:46:13.507196  836363 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:46:13.507200  836363 kubeadm.go:319] 
	I1210 06:46:13.511687  836363 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:46:13.512136  836363 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:46:13.512245  836363 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:46:13.512495  836363 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:46:13.512501  836363 kubeadm.go:319] 
	I1210 06:46:13.512574  836363 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 06:46:13.512709  836363 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00070392s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
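
The SystemVerification warning repeated above states that, for kubelet v1.35 or newer on a cgroups v1 host, cgroup v1 support must be opted into explicitly via the kubelet configuration option 'FailCgroupV1'. A minimal sketch of that opt-in as it would appear in a KubeletConfiguration file; the lowercase field spelling failCgroupV1 and the standalone file name are assumptions inferred from the warning text, not something this run applied:

	# kubelet-config-sketch.yaml (hypothetical file name)
	# Opt back in to cgroups v1 so kubelet v1.35+ does not fail on this host;
	# failCgroupV1 is assumed to be the config-file form of the 'FailCgroupV1'
	# option named in the kubeadm warning above.
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false

Per the same warning, kubeadm's own validation of this condition would also have to be skipped explicitly; this run only skipped SystemVerification as a whole.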
	
	I1210 06:46:13.512792  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 06:46:13.924248  836363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:46:13.937517  836363 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:46:13.937579  836363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:46:13.945462  836363 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:46:13.945471  836363 kubeadm.go:158] found existing configuration files:
	
	I1210 06:46:13.945523  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:46:13.953499  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:46:13.953555  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:46:13.961232  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:46:13.969190  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:46:13.969248  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:46:13.976966  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:46:13.984824  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:46:13.984878  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:46:13.992414  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:46:14.002049  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:46:14.002141  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:46:14.011865  836363 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:46:14.052323  836363 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 06:46:14.052372  836363 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:46:14.126225  836363 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:46:14.126291  836363 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:46:14.126325  836363 kubeadm.go:319] OS: Linux
	I1210 06:46:14.126369  836363 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:46:14.126415  836363 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:46:14.126482  836363 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:46:14.126530  836363 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:46:14.126577  836363 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:46:14.126624  836363 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:46:14.126668  836363 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:46:14.126716  836363 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:46:14.126761  836363 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:46:14.195770  836363 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:46:14.195873  836363 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:46:14.195962  836363 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:46:14.202979  836363 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:46:14.208298  836363 out.go:252]   - Generating certificates and keys ...
	I1210 06:46:14.208399  836363 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:46:14.208478  836363 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:46:14.208559  836363 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:46:14.208622  836363 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:46:14.208696  836363 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:46:14.208754  836363 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:46:14.208821  836363 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:46:14.208886  836363 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:46:14.208964  836363 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:46:14.209040  836363 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:46:14.209080  836363 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:46:14.209138  836363 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:46:14.596166  836363 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:46:14.891862  836363 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:46:14.944957  836363 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:46:15.236183  836363 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:46:15.354206  836363 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:46:15.354795  836363 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:46:15.357335  836363 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:46:15.360719  836363 out.go:252]   - Booting up control plane ...
	I1210 06:46:15.360814  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:46:15.360889  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:46:15.360954  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:46:15.381031  836363 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:46:15.381140  836363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:46:15.389841  836363 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:46:15.391023  836363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:46:15.391179  836363 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:46:15.526794  836363 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:46:15.526907  836363 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:50:15.527073  836363 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000371584s
	I1210 06:50:15.527097  836363 kubeadm.go:319] 
	I1210 06:50:15.527182  836363 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:50:15.527235  836363 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:50:15.527340  836363 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:50:15.527347  836363 kubeadm.go:319] 
	I1210 06:50:15.527451  836363 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:50:15.527482  836363 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:50:15.527512  836363 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:50:15.527515  836363 kubeadm.go:319] 
	I1210 06:50:15.531196  836363 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:50:15.531609  836363 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:50:15.531716  836363 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:50:15.531977  836363 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 06:50:15.531981  836363 kubeadm.go:319] 
	I1210 06:50:15.532049  836363 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
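
The kubeadm output above names the manual checks for this failure mode. Collected as one shell sketch (the commands are quoted from the log itself; running them inside the node, e.g. over 'minikube ssh', is an assumption):

	# Inspect why the kubelet never became healthy on 127.0.0.1:10248
	systemctl status kubelet
	journalctl -xeu kubelet
	curl -sSL http://127.0.0.1:10248/healthz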
	I1210 06:50:15.532106  836363 kubeadm.go:403] duration metric: took 12m8.555678628s to StartCluster
	I1210 06:50:15.532150  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:50:15.532210  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:50:15.570548  836363 cri.go:89] found id: ""
	I1210 06:50:15.570562  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.570569  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:50:15.570575  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:50:15.570641  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:50:15.600057  836363 cri.go:89] found id: ""
	I1210 06:50:15.600071  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.600078  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:50:15.600083  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:50:15.600143  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:50:15.630207  836363 cri.go:89] found id: ""
	I1210 06:50:15.630221  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.630228  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:50:15.630232  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:50:15.630288  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:50:15.654767  836363 cri.go:89] found id: ""
	I1210 06:50:15.654781  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.654788  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:50:15.654793  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:50:15.654853  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:50:15.678797  836363 cri.go:89] found id: ""
	I1210 06:50:15.678823  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.678830  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:50:15.678835  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:50:15.678895  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:50:15.707130  836363 cri.go:89] found id: ""
	I1210 06:50:15.707144  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.707151  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:50:15.707157  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:50:15.707215  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:50:15.732682  836363 cri.go:89] found id: ""
	I1210 06:50:15.732696  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.732703  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:50:15.732711  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:50:15.732725  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:50:15.749626  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:50:15.749643  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:50:15.820658  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:50:15.811023   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.812026   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.813622   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.814217   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.815852   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:50:15.811023   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.812026   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.813622   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.814217   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.815852   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:50:15.820670  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:50:15.820682  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:50:15.883000  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:50:15.883021  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:50:15.913106  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:50:15.913122  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 06:50:15.972159  836363 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000371584s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:50:15.972201  836363 out.go:285] * 
	W1210 06:50:15.972316  836363 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000371584s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:50:15.972359  836363 out.go:285] * 
	W1210 06:50:15.974510  836363 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:50:15.979994  836363 out.go:203] 
	W1210 06:50:15.983642  836363 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000371584s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:50:15.983686  836363 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:50:15.983706  836363 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:50:15.987432  836363 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445107196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445121990Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445162984Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445179287Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445188756Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445200998Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445209959Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445223464Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445238939Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445267518Z" level=info msg="Connect containerd service"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445551476Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.446055950Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.466617657Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.466678671Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.466705092Z" level=info msg="Start subscribing containerd event"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.466755874Z" level=info msg="Start recovering state"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511858771Z" level=info msg="Start event monitor"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511903539Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511912844Z" level=info msg="Start streaming server"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511923740Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511932676Z" level=info msg="runtime interface starting up..."
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511939502Z" level=info msg="starting plugins..."
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511951014Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 06:38:05 functional-534748 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.523710063Z" level=info msg="containerd successfully booted in 0.098844s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:50:17.222060   20994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:17.222707   20994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:17.224248   20994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:17.224639   20994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:17.226115   20994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 06:50:17 up  5:32,  0 user,  load average: 0.61, 0.24, 0.46
	Linux functional-534748 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:50:14 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:50:14 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 10 06:50:14 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:14 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:14 functional-534748 kubelet[20800]: E1210 06:50:14.839067   20800 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:50:14 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:50:14 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:50:15 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 10 06:50:15 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:15 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:15 functional-534748 kubelet[20810]: E1210 06:50:15.604481   20810 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:50:15 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:50:15 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:50:16 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 10 06:50:16 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:16 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:16 functional-534748 kubelet[20904]: E1210 06:50:16.331570   20904 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:50:16 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:50:16 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:50:17 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 10 06:50:17 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:17 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:17 functional-534748 kubelet[20967]: E1210 06:50:17.103850   20967 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:50:17 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:50:17 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
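Note: the kubelet section above shows the failure cause directly: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host, so its /healthz endpoint never comes up and kubeadm's wait-control-plane phase times out. A minimal node-side sketch of the check and the relaxation the kubeadm warning names; the failCgroupV1 field name/casing in the kubelet configuration is an assumption, everything else (commands, path) is taken from the log itself:

    # per kubeadm's troubleshooting advice printed above
    systemctl status kubelet
    journalctl -xeu kubelet | grep -i "cgroup v1"

    # assumption: the KubeletConfiguration field is failCgroupV1; the path is the
    # config file the [kubelet-start] lines above show kubeadm writing. This is a
    # sketch of the change the warning names, not a recommendation.
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet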
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748: exit status 2 (335.439355ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-534748" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (735.20s)
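Note: the Suggestion line in the log above maps to the following invocation; a sketch with the flag text taken verbatim from that line and the binary/profile names from this test run:

    # retry the start with the cgroup driver override minikube suggests
    out/minikube-linux-arm64 start -p functional-534748 \
      --extra-config=kubelet.cgroup-driver=systemd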

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-534748 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-534748 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (61.992575ms)

-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-534748 get po -l tier=control-plane -n kube-system -o=json": exit status 1
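Note: the refused connection can be probed directly from the host; a sketch against the same address kubectl reports above (the /healthz path is the standard apiserver health endpoint and is an assumption here, not something this output shows):

    # probe the apiserver endpoint kubectl fails to reach
    curl -k --connect-timeout 5 https://192.168.49.2:8441/healthz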
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-534748
helpers_test.go:244: (dbg) docker inspect functional-534748:

-- stdout --
	[
	    {
	        "Id": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	        "Created": "2025-12-10T06:23:23.608302198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 825111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:23:23.673039154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hostname",
	        "HostsPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hosts",
	        "LogPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db-json.log",
	        "Name": "/functional-534748",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-534748:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-534748",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	                "LowerDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-534748",
	                "Source": "/var/lib/docker/volumes/functional-534748/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-534748",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-534748",
	                "name.minikube.sigs.k8s.io": "functional-534748",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3110adc4cbcb3c834e173481804e433b3e0d0c32a77d5d828fb821433a717f76",
	            "SandboxKey": "/var/run/docker/netns/3110adc4cbcb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-534748": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:67:c6:ed:32:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5c6adbf364f07065a2170dca5c03ba4b1a3df833c656e117d5727d09797ca30e",
	                    "EndpointID": "ba255b7075cbb83aa425533c0034d7778d609f47fa6442925eb6c8393edb0fa6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-534748",
	                        "afb46bc1850e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
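Note: the NetworkSettings block in the inspect output above shows 8441/tcp published to 127.0.0.1:33533; the same binding can be read with a one-liner (docker port is a standard CLI subcommand; the container name is this test's profile):

    # show the host binding for the apiserver port seen in the inspect output
    docker port functional-534748 8441/tcp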
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748: exit status 2 (289.985398ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-634209 image ls --format yaml --alsologtostderr                                                                                              │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ functional-634209 ssh pgrep buildkitd                                                                                                                   │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ image   │ functional-634209 image ls --format json --alsologtostderr                                                                                              │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image   │ functional-634209 image build -t localhost/my-image:functional-634209 testdata/build --alsologtostderr                                                  │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image   │ functional-634209 image ls --format table --alsologtostderr                                                                                             │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ image   │ functional-634209 image ls                                                                                                                              │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ delete  │ -p functional-634209                                                                                                                                    │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start   │ -p functional-534748 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ start   │ -p functional-534748 --alsologtostderr -v=8                                                                                                             │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:31 UTC │                     │
	│ cache   │ functional-534748 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ functional-534748 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ functional-534748 cache add registry.k8s.io/pause:latest                                                                                                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ functional-534748 cache add minikube-local-cache-test:functional-534748                                                                                 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ functional-534748 cache delete minikube-local-cache-test:functional-534748                                                                              │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ ssh     │ functional-534748 ssh sudo crictl images                                                                                                                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ ssh     │ functional-534748 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ ssh     │ functional-534748 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │                     │
	│ cache   │ functional-534748 cache reload                                                                                                                          │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ ssh     │ functional-534748 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ kubectl │ functional-534748 kubectl -- --context functional-534748 get pods                                                                                       │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │                     │
	│ start   │ -p functional-534748 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:38 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:38:02
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:38:02.996848  836363 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:38:02.996953  836363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:38:02.996957  836363 out.go:374] Setting ErrFile to fd 2...
	I1210 06:38:02.996961  836363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:38:02.997226  836363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 06:38:02.997576  836363 out.go:368] Setting JSON to false
	I1210 06:38:02.998612  836363 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":19207,"bootTime":1765329476,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 06:38:02.998671  836363 start.go:143] virtualization:  
	I1210 06:38:03.004094  836363 out.go:179] * [functional-534748] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:38:03.007279  836363 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:38:03.007472  836363 notify.go:221] Checking for updates...
	I1210 06:38:03.013532  836363 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:38:03.016433  836363 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:38:03.019434  836363 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 06:38:03.022270  836363 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:38:03.025162  836363 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:38:03.028574  836363 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:38:03.028673  836363 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:38:03.063427  836363 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:38:03.063527  836363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:38:03.124292  836363 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 06:38:03.114881143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:38:03.124387  836363 docker.go:319] overlay module found
	I1210 06:38:03.127603  836363 out.go:179] * Using the docker driver based on existing profile
	I1210 06:38:03.130606  836363 start.go:309] selected driver: docker
	I1210 06:38:03.130616  836363 start.go:927] validating driver "docker" against &{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:38:03.130726  836363 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:38:03.130828  836363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:38:03.183470  836363 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 06:38:03.17400928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:38:03.183897  836363 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:38:03.183921  836363 cni.go:84] Creating CNI manager for ""
	I1210 06:38:03.183969  836363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:38:03.184018  836363 start.go:353] cluster config:
	{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:38:03.188981  836363 out.go:179] * Starting "functional-534748" primary control-plane node in "functional-534748" cluster
	I1210 06:38:03.191768  836363 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 06:38:03.194630  836363 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:38:03.197557  836363 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:38:03.197592  836363 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 06:38:03.197600  836363 cache.go:65] Caching tarball of preloaded images
	I1210 06:38:03.197644  836363 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:38:03.197695  836363 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 06:38:03.197704  836363 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1210 06:38:03.197812  836363 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/config.json ...
	I1210 06:38:03.219374  836363 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:38:03.219395  836363 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:38:03.219415  836363 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:38:03.219445  836363 start.go:360] acquireMachinesLock for functional-534748: {Name:mkd9a3d78ae3a00b69c5be0f7badb099aea924eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:38:03.219514  836363 start.go:364] duration metric: took 49.855µs to acquireMachinesLock for "functional-534748"
	I1210 06:38:03.219532  836363 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:38:03.219536  836363 fix.go:54] fixHost starting: 
	I1210 06:38:03.219816  836363 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:38:03.236144  836363 fix.go:112] recreateIfNeeded on functional-534748: state=Running err=<nil>
	W1210 06:38:03.236163  836363 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:38:03.239412  836363 out.go:252] * Updating the running docker "functional-534748" container ...
	I1210 06:38:03.239438  836363 machine.go:94] provisionDockerMachine start ...
	I1210 06:38:03.239539  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:03.255986  836363 main.go:143] libmachine: Using SSH client type: native
	I1210 06:38:03.256288  836363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:38:03.256294  836363 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:38:03.393920  836363 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-534748
	
	I1210 06:38:03.393934  836363 ubuntu.go:182] provisioning hostname "functional-534748"
	I1210 06:38:03.393994  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:03.411659  836363 main.go:143] libmachine: Using SSH client type: native
	I1210 06:38:03.411963  836363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:38:03.411982  836363 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-534748 && echo "functional-534748" | sudo tee /etc/hostname
	I1210 06:38:03.556341  836363 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-534748
	
	I1210 06:38:03.556409  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:03.574119  836363 main.go:143] libmachine: Using SSH client type: native
	I1210 06:38:03.574414  836363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:38:03.574427  836363 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-534748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-534748/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-534748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:38:03.711044  836363 main.go:143] libmachine: SSH cmd err, output: <nil>: 
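All of the provisioning commands above run over SSH to the container's published 22/tcp port. As a sketch (not part of the test run), the same connection can be reproduced by hand with the port template, key path, and user shown in the surrounding log lines:

	# look up the host port that Docker mapped to the node's sshd, then connect as the log does
	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-534748)
	ssh -o StrictHostKeyChecking=no -p "$PORT" \
	  -i /home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa \
	  docker@127.0.0.1 hostname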
	I1210 06:38:03.711071  836363 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 06:38:03.711104  836363 ubuntu.go:190] setting up certificates
	I1210 06:38:03.711119  836363 provision.go:84] configureAuth start
	I1210 06:38:03.711202  836363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
	I1210 06:38:03.730176  836363 provision.go:143] copyHostCerts
	I1210 06:38:03.730250  836363 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 06:38:03.730257  836363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 06:38:03.730338  836363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 06:38:03.730431  836363 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 06:38:03.730435  836363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 06:38:03.730459  836363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 06:38:03.730669  836363 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 06:38:03.730673  836363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 06:38:03.730699  836363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 06:38:03.730787  836363 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.functional-534748 san=[127.0.0.1 192.168.49.2 functional-534748 localhost minikube]
	I1210 06:38:03.830346  836363 provision.go:177] copyRemoteCerts
	I1210 06:38:03.830399  836363 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:38:03.830448  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:03.847359  836363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:38:03.942214  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 06:38:03.959615  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:38:03.976341  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:38:03.993197  836363 provision.go:87] duration metric: took 282.055172ms to configureAuth
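configureAuth issues the server certificate in-process (Go crypto/x509), not via a shell command. An equivalent openssl sketch for the same SAN set, assuming the CA files named above are at hand, would be:

	# bash required for the <(...) process substitution; filenames are illustrative
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.functional-534748" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-534748,DNS:localhost,DNS:minikube') \
	  -out server.pem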
	I1210 06:38:03.993214  836363 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:38:03.993400  836363 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:38:03.993405  836363 machine.go:97] duration metric: took 753.963524ms to provisionDockerMachine
	I1210 06:38:03.993412  836363 start.go:293] postStartSetup for "functional-534748" (driver="docker")
	I1210 06:38:03.993421  836363 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:38:03.993478  836363 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:38:03.993515  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:04.011825  836363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:38:04.110674  836363 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:38:04.114166  836363 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:38:04.114184  836363 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:38:04.114196  836363 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 06:38:04.114252  836363 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 06:38:04.114330  836363 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 06:38:04.114407  836363 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts -> hosts in /etc/test/nested/copy/786751
	I1210 06:38:04.114451  836363 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/786751
	I1210 06:38:04.122085  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 06:38:04.140353  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts --> /etc/test/nested/copy/786751/hosts (40 bytes)
	I1210 06:38:04.160314  836363 start.go:296] duration metric: took 166.888171ms for postStartSetup
	I1210 06:38:04.160387  836363 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:38:04.160439  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:04.179224  836363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:38:04.271903  836363 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:38:04.277112  836363 fix.go:56] duration metric: took 1.057568371s for fixHost
	I1210 06:38:04.277129  836363 start.go:83] releasing machines lock for "functional-534748", held for 1.057608798s
	I1210 06:38:04.277219  836363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
	I1210 06:38:04.295104  836363 ssh_runner.go:195] Run: cat /version.json
	I1210 06:38:04.295130  836363 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:38:04.295198  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:04.295203  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:04.320108  836363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:38:04.320646  836363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:38:04.418978  836363 ssh_runner.go:195] Run: systemctl --version
	I1210 06:38:04.509352  836363 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:38:04.513794  836363 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:38:04.513869  836363 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:38:04.521471  836363 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:38:04.521486  836363 start.go:496] detecting cgroup driver to use...
	I1210 06:38:04.521523  836363 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:38:04.521580  836363 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 06:38:04.537005  836363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 06:38:04.550809  836363 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:38:04.550892  836363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:38:04.567139  836363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:38:04.580704  836363 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:38:04.697131  836363 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:38:04.843057  836363 docker.go:234] disabling docker service ...
	I1210 06:38:04.843134  836363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:38:04.858243  836363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:38:04.871472  836363 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:38:04.992555  836363 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:38:05.113941  836363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:38:05.127335  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:38:05.141919  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 06:38:05.151900  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 06:38:05.161151  836363 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 06:38:05.161213  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 06:38:05.170764  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:38:05.180471  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 06:38:05.189238  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:38:05.197957  836363 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:38:05.206107  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 06:38:05.215515  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 06:38:05.224555  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 06:38:05.233326  836363 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:38:05.241235  836363 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:38:05.248850  836363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:38:05.372410  836363 ssh_runner.go:195] Run: sudo systemctl restart containerd
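The sed runs above mutate /etc/containerd/config.toml in place before the restart. One way to verify the rendered result afterwards (a sketch, not something the test executes) is:

	# dump the effective containerd config and confirm the settings the seds targeted
	sudo containerd config dump | grep -nE 'SystemdCgroup|sandbox_image|conf_dir'
	sudo systemctl is-active containerd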
	I1210 06:38:05.513843  836363 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 06:38:05.513915  836363 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 06:38:05.519638  836363 start.go:564] Will wait 60s for crictl version
	I1210 06:38:05.519732  836363 ssh_runner.go:195] Run: which crictl
	I1210 06:38:05.524751  836363 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:38:05.554788  836363 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 06:38:05.554852  836363 ssh_runner.go:195] Run: containerd --version
	I1210 06:38:05.575345  836363 ssh_runner.go:195] Run: containerd --version
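crictl picks up its endpoint from the /etc/crictl.yaml written at 06:38:05.127335; passing it explicitly on the command line is equivalent:

	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version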
	I1210 06:38:05.606405  836363 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 06:38:05.609314  836363 cli_runner.go:164] Run: docker network inspect functional-534748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:38:05.625429  836363 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
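The grep checks, inside the node, that host.minikube.internal already maps to the bridge gateway (192.168.49.1 here). A sketch of fetching that gateway from the host side with the same template machinery:

	# prints the gateway of the cluster's docker network, e.g. 192.168.49.1
	docker network inspect functional-534748 --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'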
	I1210 06:38:05.632180  836363 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1210 06:38:05.635024  836363 kubeadm.go:884] updating cluster {Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:38:05.635199  836363 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:38:05.635275  836363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:38:05.663485  836363 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 06:38:05.663496  836363 containerd.go:534] Images already preloaded, skipping extraction
	I1210 06:38:05.663555  836363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:38:05.692188  836363 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 06:38:05.692214  836363 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:38:05.692220  836363 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1210 06:38:05.692316  836363 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-534748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
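The two memory scp targets below land as /lib/systemd/system/kubelet.service plus the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in whose contents are printed above; the merged unit can be inspected with:

	# show the unit together with its drop-ins, and the effective ExecStart
	systemctl cat kubelet
	systemctl show kubelet -p ExecStart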
	I1210 06:38:05.692382  836363 ssh_runner.go:195] Run: sudo crictl info
	I1210 06:38:05.716412  836363 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1210 06:38:05.716430  836363 cni.go:84] Creating CNI manager for ""
	I1210 06:38:05.716438  836363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:38:05.716453  836363 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:38:05.716479  836363 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-534748 NodeName:functional-534748 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:38:05.716586  836363 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-534748"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:38:05.716652  836363 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 06:38:05.724579  836363 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:38:05.724638  836363 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:38:05.732044  836363 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 06:38:05.744806  836363 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 06:38:05.757235  836363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
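Before the init phases below consume kubeadm.yaml.new, the file can be sanity-checked; a sketch assuming a kubeadm recent enough (1.26+) to ship the "config validate" subcommand:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new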
	I1210 06:38:05.769602  836363 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:38:05.773238  836363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:38:05.892525  836363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:38:06.296632  836363 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748 for IP: 192.168.49.2
	I1210 06:38:06.296643  836363 certs.go:195] generating shared ca certs ...
	I1210 06:38:06.296658  836363 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:38:06.296809  836363 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 06:38:06.296849  836363 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 06:38:06.296855  836363 certs.go:257] generating profile certs ...
	I1210 06:38:06.296937  836363 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key
	I1210 06:38:06.297021  836363 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key.7cb3dc2f
	I1210 06:38:06.297068  836363 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key
	I1210 06:38:06.297177  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 06:38:06.297208  836363 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 06:38:06.297216  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:38:06.297246  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 06:38:06.297268  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:38:06.297291  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 06:38:06.297337  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 06:38:06.297938  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:38:06.317159  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:38:06.336653  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:38:06.357682  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:38:06.376860  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:38:06.394800  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:38:06.412862  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:38:06.430175  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:38:06.447717  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 06:38:06.465124  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:38:06.482520  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 06:38:06.500341  836363 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:38:06.513157  836363 ssh_runner.go:195] Run: openssl version
	I1210 06:38:06.519293  836363 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 06:38:06.526724  836363 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 06:38:06.534054  836363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 06:38:06.537762  836363 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 06:38:06.537817  836363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 06:38:06.579287  836363 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:38:06.586741  836363 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:06.593909  836363 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:38:06.601430  836363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:06.605107  836363 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:06.605174  836363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:06.646057  836363 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:38:06.653276  836363 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 06:38:06.660757  836363 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 06:38:06.668784  836363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 06:38:06.672757  836363 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 06:38:06.672825  836363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 06:38:06.713985  836363 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
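The /etc/ssl/certs/*.0 names tested above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash links in the classic c_rehash layout; recreating one by hand looks like:

	# the link name is the cert's subject hash plus a ".0" suffix
	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${H}.0"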
	I1210 06:38:06.721257  836363 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:38:06.724932  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:38:06.765952  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:38:06.807038  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:38:06.847752  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:38:06.890289  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:38:06.933893  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
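Each -checkend 86400 call above exits 0 only if the certificate is still valid 24 hours from now; the sweep condenses to a loop like this sketch (a subset of the certs shown):

	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
	    && echo "${c}: ok" || echo "${c}: expires within 24h"
	done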
	I1210 06:38:06.976437  836363 kubeadm.go:401] StartCluster: {Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:38:06.976545  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 06:38:06.976606  836363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:38:07.011412  836363 cri.go:89] found id: ""
	I1210 06:38:07.011470  836363 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:38:07.019342  836363 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:38:07.019351  836363 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:38:07.019420  836363 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:38:07.026888  836363 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:38:07.027424  836363 kubeconfig.go:125] found "functional-534748" server: "https://192.168.49.2:8441"
	I1210 06:38:07.028660  836363 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:38:07.037364  836363 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 06:23:31.333930823 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 06:38:05.762986837 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
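The drift detection is a plain unified diff of the previously applied config against the newly rendered one; as a sketch:

	if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
	  echo "kubeadm config drift detected; reconfiguring cluster"
	fi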
	I1210 06:38:07.037389  836363 kubeadm.go:1161] stopping kube-system containers ...
	I1210 06:38:07.037401  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1210 06:38:07.037465  836363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:38:07.075015  836363 cri.go:89] found id: ""
	I1210 06:38:07.075109  836363 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 06:38:07.098429  836363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:38:07.106312  836363 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 10 06:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 10 06:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 10 06:27 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 10 06:27 /etc/kubernetes/scheduler.conf
	
	I1210 06:38:07.106367  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:38:07.114107  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:38:07.122067  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:38:07.122121  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:38:07.130176  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:38:07.138001  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:38:07.138055  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:38:07.145554  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:38:07.153390  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:38:07.153446  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
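The pattern above is applied per file: keep a kubeconfig only if it already points at the expected control-plane endpoint, otherwise remove it so the init phases regenerate it. Condensed into a sketch:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done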
	I1210 06:38:07.160768  836363 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:38:07.168493  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:38:07.213471  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:38:08.026655  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:38:08.236384  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:38:08.298826  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
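The restart path replays selected kubeadm init phases rather than a full kubeadm init; the five runs above amount to this sketch:

	K=/var/lib/minikube/binaries/v1.35.0-beta.0
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  # $phase is intentionally unquoted so "certs all" expands to two arguments
	  sudo env PATH="$K:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done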
	I1210 06:38:08.351741  836363 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:38:08.351821  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:08.852713  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:09.352205  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:09.852735  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:10.352309  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:10.851981  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:11.352872  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:11.852826  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:12.352059  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:12.852894  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:13.352052  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:13.851883  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:14.351956  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:14.852642  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:15.352606  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:15.852015  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:16.352784  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:16.852024  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:17.351924  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:17.852941  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:18.352970  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:18.852320  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:19.352100  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:19.852911  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:20.352224  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:20.852520  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:21.352048  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:21.851954  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:22.352639  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:22.852718  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:23.352574  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:23.851998  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:24.352693  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:24.851979  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:25.352948  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:25.852529  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:26.351982  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:26.852421  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:27.352059  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:27.851955  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:28.351909  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:28.852783  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:29.352790  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:29.852562  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:30.352816  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:30.852170  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:31.352863  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:31.852962  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:32.351970  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:32.852604  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:33.352940  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:33.852377  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:34.352015  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:34.852768  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:35.352496  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:35.852012  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:36.351968  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:36.852867  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:37.351948  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:37.852026  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:38.351985  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:38.852728  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:39.351971  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:39.852981  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:40.352705  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:40.852754  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:41.352353  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:41.852845  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:42.352945  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:42.852200  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:43.352581  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... identical pgrep probe repeated at ~500ms intervals, 48 further attempts from 06:38:43.851999 through 06:39:07.351946 elided ...]
	I1210 06:39:07.852642  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
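
The run above is minikube's apiserver wait loop: the same pgrep probe fires roughly every 500ms until the kube-apiserver process shows up or the wait deadline trips. A minimal Go sketch of that loop, assuming a local exec stand-in for minikube's SSH runner (the real code path is ssh_runner.go; the helper below is hypothetical, not minikube's API):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runRemote stands in for minikube's SSH runner; shelling out locally
// keeps the sketch self-contained (an assumption, not minikube's API).
func runRemote(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

// waitForAPIServerProcess polls pgrep on the 500ms cadence seen in the
// log until a matching process exists (pgrep exits 0) or time runs out.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := runRemote(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process never appeared within %v", timeout)
}

func main() {
	if err := waitForAPIServerProcess(30 * time.Second); err != nil {
		fmt.Println(err)
	}
}
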
	I1210 06:39:08.352868  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:08.352944  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:08.381205  836363 cri.go:89] found id: ""
	I1210 06:39:08.381219  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.381227  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:08.381232  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:08.381288  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:08.404633  836363 cri.go:89] found id: ""
	I1210 06:39:08.404646  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.404654  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:08.404659  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:08.404721  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:08.428513  836363 cri.go:89] found id: ""
	I1210 06:39:08.428527  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.428534  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:08.428546  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:08.428606  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:08.453023  836363 cri.go:89] found id: ""
	I1210 06:39:08.453036  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.453043  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:08.453049  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:08.453105  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:08.481527  836363 cri.go:89] found id: ""
	I1210 06:39:08.481540  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.481547  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:08.481552  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:08.481609  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:08.506550  836363 cri.go:89] found id: ""
	I1210 06:39:08.506565  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.506580  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:08.506585  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:08.506649  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:08.531724  836363 cri.go:89] found id: ""
	I1210 06:39:08.531738  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.531745  836363 logs.go:284] No container was found matching "kindnet"
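
Once the process probe gives up, each retry sweeps the CRI for every control-plane container by name; empty output from "crictl ps -a --quiet --name=..." is what produces the paired "0 containers" / "No container was found" lines above. A sketch of that sweep, with local exec again standing in for the SSH runner (an assumption; the component list is taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the container IDs crictl prints one per line;
// an empty result is how "No container was found" is detected above.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		ids, err := listContainerIDs(name)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: found %v\n", name, ids)
	}
}
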
	I1210 06:39:08.531752  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:08.531763  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:08.571815  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:08.571832  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:08.630094  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:08.630112  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:08.647317  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:08.647335  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:08.715592  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:08.706872   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.707541   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.709199   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.709733   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.711367   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:08.706872   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.707541   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.709199   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.709733   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.711367   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
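
The "connection refused" on localhost:8441 above means nothing is listening on the apiserver port at all yet; 8441 is the --apiserver-port this test passes to minikube start. A quick way to confirm that independently of kubectl, as a sketch:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// While the apiserver is down this fails exactly like kubectl does.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8441")
}
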
	I1210 06:39:08.715603  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:08.715614  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
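
The kubelet and containerd entries in each gathering pass come from tailing the matching systemd unit's journal, 400 lines at a time. A sketch of that step, again with local exec in place of the SSH runner (an assumption; unit names and the line count come straight from the log):

package main

import (
	"fmt"
	"os/exec"
)

// unitLogs tails a systemd unit's journal, mirroring the
// "journalctl -u <unit> -n 400" invocations above.
func unitLogs(unit string, lines int) (string, error) {
	cmd := fmt.Sprintf("sudo journalctl -u %s -n %d", unit, lines)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	for _, unit := range []string{"kubelet", "containerd"} {
		out, err := unitLogs(unit, 400)
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", unit, err)
			continue
		}
		fmt.Printf("=== %s (last %d lines) ===\n%s\n", unit, 400, out)
	}
}
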
	I1210 06:39:11.280652  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:11.290422  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:11.290516  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:11.314331  836363 cri.go:89] found id: ""
	I1210 06:39:11.314345  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.314352  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:11.314357  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:11.314419  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:11.337726  836363 cri.go:89] found id: ""
	I1210 06:39:11.337741  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.337747  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:11.337752  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:11.337812  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:11.365800  836363 cri.go:89] found id: ""
	I1210 06:39:11.365815  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.365821  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:11.365826  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:11.365886  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:11.394804  836363 cri.go:89] found id: ""
	I1210 06:39:11.394818  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.394825  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:11.394830  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:11.394887  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:11.419726  836363 cri.go:89] found id: ""
	I1210 06:39:11.419740  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.419746  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:11.419751  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:11.419810  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:11.445533  836363 cri.go:89] found id: ""
	I1210 06:39:11.445547  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.445554  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:11.445560  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:11.445618  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:11.470212  836363 cri.go:89] found id: ""
	I1210 06:39:11.470227  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.470233  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:11.470241  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:11.470251  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:11.529183  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:11.529202  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:11.546384  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:11.546400  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:11.640312  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:11.631803   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.632307   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.633874   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.635142   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.635867   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:11.631803   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.632307   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.633874   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.635142   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.635867   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:11.640322  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:11.640333  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:11.703828  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:11.703850  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
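
The "container status" step prefers crictl when it is available and otherwise falls back to "docker ps -a", which is the intent of the shell one-liner above. The same fallback rendered in Go, with local exec as a stand-in for the SSH runner (an assumption, and a simplification of the one-liner's exact shell semantics):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the intent of:
//   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
func containerStatus() (string, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		return string(out), err
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(out)
}
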
	I1210 06:39:14.230665  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:14.241121  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:14.241183  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:14.268951  836363 cri.go:89] found id: ""
	I1210 06:39:14.268964  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.268974  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:14.268979  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:14.269035  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:14.292742  836363 cri.go:89] found id: ""
	I1210 06:39:14.292761  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.292768  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:14.292773  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:14.292838  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:14.317527  836363 cri.go:89] found id: ""
	I1210 06:39:14.317540  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.317547  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:14.317552  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:14.317609  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:14.344738  836363 cri.go:89] found id: ""
	I1210 06:39:14.344751  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.344758  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:14.344764  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:14.344822  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:14.369086  836363 cri.go:89] found id: ""
	I1210 06:39:14.369101  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.369108  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:14.369114  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:14.369172  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:14.393919  836363 cri.go:89] found id: ""
	I1210 06:39:14.393932  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.393938  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:14.393943  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:14.394005  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:14.418228  836363 cri.go:89] found id: ""
	I1210 06:39:14.418242  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.418249  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:14.418257  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:14.418267  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:14.481544  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:14.481564  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:14.509051  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:14.509072  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:14.574238  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:14.574259  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:14.594306  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:14.594323  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:14.659264  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:14.651062   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.651501   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.653284   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.653744   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.655187   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:14.651062   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.651501   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.653284   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.653744   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.655187   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:17.159960  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:17.169978  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:17.170036  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:17.194333  836363 cri.go:89] found id: ""
	I1210 06:39:17.194347  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.194354  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:17.194359  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:17.194418  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:17.218507  836363 cri.go:89] found id: ""
	I1210 06:39:17.218521  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.218528  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:17.218533  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:17.218617  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:17.243499  836363 cri.go:89] found id: ""
	I1210 06:39:17.243513  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.243521  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:17.243527  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:17.243585  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:17.271019  836363 cri.go:89] found id: ""
	I1210 06:39:17.271034  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.271041  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:17.271048  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:17.271106  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:17.296491  836363 cri.go:89] found id: ""
	I1210 06:39:17.296506  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.296513  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:17.296517  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:17.296574  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:17.327127  836363 cri.go:89] found id: ""
	I1210 06:39:17.327142  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.327149  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:17.327156  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:17.327214  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:17.351001  836363 cri.go:89] found id: ""
	I1210 06:39:17.351016  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.351023  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:17.351031  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:17.351046  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:17.408952  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:17.408971  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:17.425660  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:17.425676  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:17.495167  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:17.486424   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.487213   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.488883   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.489501   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.491179   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:17.486424   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.487213   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.488883   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.489501   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.491179   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:17.495179  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:17.495190  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:17.562848  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:17.562868  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:20.100845  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:20.111238  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:20.111303  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:20.135715  836363 cri.go:89] found id: ""
	I1210 06:39:20.135730  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.135737  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:20.135742  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:20.135849  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:20.162728  836363 cri.go:89] found id: ""
	I1210 06:39:20.162742  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.162750  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:20.162754  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:20.162817  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:20.186896  836363 cri.go:89] found id: ""
	I1210 06:39:20.186910  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.186918  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:20.186923  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:20.187033  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:20.211401  836363 cri.go:89] found id: ""
	I1210 06:39:20.211416  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.211423  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:20.211428  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:20.211494  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:20.241049  836363 cri.go:89] found id: ""
	I1210 06:39:20.241063  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.241071  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:20.241075  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:20.241136  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:20.264812  836363 cri.go:89] found id: ""
	I1210 06:39:20.264826  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.264833  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:20.264839  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:20.264905  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:20.289153  836363 cri.go:89] found id: ""
	I1210 06:39:20.289167  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.289179  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:20.289187  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:20.289198  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:20.305825  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:20.305841  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:20.372702  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:20.364207   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.364892   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.366572   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.367140   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.368841   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:20.364207   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.364892   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.366572   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.367140   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.368841   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:20.372716  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:20.372727  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:20.434137  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:20.434156  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:20.462784  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:20.462801  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:23.020338  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:23.033250  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:23.033312  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:23.057227  836363 cri.go:89] found id: ""
	I1210 06:39:23.057241  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.057247  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:23.057252  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:23.057310  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:23.082261  836363 cri.go:89] found id: ""
	I1210 06:39:23.082275  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.082282  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:23.082287  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:23.082346  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:23.106424  836363 cri.go:89] found id: ""
	I1210 06:39:23.106438  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.106445  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:23.106451  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:23.106554  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:23.132399  836363 cri.go:89] found id: ""
	I1210 06:39:23.132414  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.132429  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:23.132435  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:23.132492  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:23.162454  836363 cri.go:89] found id: ""
	I1210 06:39:23.162494  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.162501  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:23.162507  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:23.162581  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:23.187219  836363 cri.go:89] found id: ""
	I1210 06:39:23.187233  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.187240  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:23.187245  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:23.187310  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:23.212781  836363 cri.go:89] found id: ""
	I1210 06:39:23.212795  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.212802  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:23.212809  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:23.212821  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:23.269301  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:23.269321  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:23.286019  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:23.286034  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:23.349588  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:23.342068   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.342600   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.344048   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.344478   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.345899   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:23.342068   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.342600   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.344048   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.344478   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.345899   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:23.349598  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:23.349608  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:23.410637  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:23.410657  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:25.946659  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:25.956427  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:25.956484  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:25.980198  836363 cri.go:89] found id: ""
	I1210 06:39:25.980212  836363 logs.go:282] 0 containers: []
	W1210 06:39:25.980219  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:25.980224  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:25.980282  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:26.007385  836363 cri.go:89] found id: ""
	I1210 06:39:26.007400  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.007408  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:26.007413  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:26.007504  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:26.036729  836363 cri.go:89] found id: ""
	I1210 06:39:26.036743  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.036750  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:26.036755  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:26.036816  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:26.062224  836363 cri.go:89] found id: ""
	I1210 06:39:26.062238  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.062245  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:26.062250  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:26.062310  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:26.087647  836363 cri.go:89] found id: ""
	I1210 06:39:26.087661  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.087668  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:26.087682  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:26.087742  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:26.111730  836363 cri.go:89] found id: ""
	I1210 06:39:26.111744  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.111751  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:26.111756  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:26.111815  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:26.140490  836363 cri.go:89] found id: ""
	I1210 06:39:26.140504  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.140511  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:26.140525  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:26.140534  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:26.196200  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:26.196219  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:26.212571  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:26.212587  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:26.273577  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:26.265176   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.265699   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.267151   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.267679   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.269363   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:26.265176   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.265699   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.267151   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.267679   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.269363   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:26.273590  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:26.273603  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:26.335078  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:26.335098  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:28.869553  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:28.880899  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:28.880964  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:28.906428  836363 cri.go:89] found id: ""
	I1210 06:39:28.906442  836363 logs.go:282] 0 containers: []
	W1210 06:39:28.906449  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:28.906454  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:28.906544  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:28.931886  836363 cri.go:89] found id: ""
	I1210 06:39:28.931900  836363 logs.go:282] 0 containers: []
	W1210 06:39:28.931908  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:28.931912  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:28.931973  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:28.961315  836363 cri.go:89] found id: ""
	I1210 06:39:28.961329  836363 logs.go:282] 0 containers: []
	W1210 06:39:28.961336  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:28.961340  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:28.961401  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:28.986397  836363 cri.go:89] found id: ""
	I1210 06:39:28.986411  836363 logs.go:282] 0 containers: []
	W1210 06:39:28.986419  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:28.986425  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:28.986507  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:29.012532  836363 cri.go:89] found id: ""
	I1210 06:39:29.012546  836363 logs.go:282] 0 containers: []
	W1210 06:39:29.012554  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:29.012559  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:29.012617  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:29.041722  836363 cri.go:89] found id: ""
	I1210 06:39:29.041736  836363 logs.go:282] 0 containers: []
	W1210 06:39:29.041744  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:29.041749  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:29.041810  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:29.067638  836363 cri.go:89] found id: ""
	I1210 06:39:29.067652  836363 logs.go:282] 0 containers: []
	W1210 06:39:29.067660  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:29.067675  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:29.067686  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:29.123932  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:29.123951  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:29.140346  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:29.140363  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:29.205033  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:29.196885   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.197511   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.199079   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.199683   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.201215   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:29.196885   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.197511   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.199079   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.199683   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.201215   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:29.205044  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:29.205056  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:29.268564  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:29.268592  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
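
The cycle above is minikube's control-plane probe while it waits for the apiserver: each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) is checked with `crictl ps -a --quiet --name=<component>`, and an empty ID list produces the W-level "No container was found matching" line. Below is a minimal Go sketch of that per-component check, run locally rather than over minikube's ssh_runner and using illustrative helper names; it is not minikube's actual cri.go API.

    // Per-component container probe like the one in the log: run
    // `crictl ps -a --quiet --name=<component>` and treat empty output
    // as "no container found". Assumes crictl is installed locally.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs returns the container IDs crictl reports for a
    // name filter, across all states (-a), IDs only (--quiet).
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, c := range components {
            ids, err := listContainerIDs(c)
            switch {
            case err != nil:
                fmt.Printf("probe %q failed: %v\n", c, err)
            case len(ids) == 0:
                // Mirrors the W-level line in the log above.
                fmt.Printf("No container was found matching %q\n", c)
            default:
                fmt.Printf("%s: %v\n", c, ids)
            }
        }
    }
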
	I1210 06:39:31.797415  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:31.810439  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:31.810560  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:31.839718  836363 cri.go:89] found id: ""
	I1210 06:39:31.839731  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.839738  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:31.839743  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:31.839812  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:31.866887  836363 cri.go:89] found id: ""
	I1210 06:39:31.866901  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.866908  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:31.866913  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:31.866971  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:31.896088  836363 cri.go:89] found id: ""
	I1210 06:39:31.896102  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.896109  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:31.896114  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:31.896183  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:31.920769  836363 cri.go:89] found id: ""
	I1210 06:39:31.920783  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.920790  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:31.920804  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:31.920870  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:31.944941  836363 cri.go:89] found id: ""
	I1210 06:39:31.944955  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.944973  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:31.944979  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:31.945062  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:31.969699  836363 cri.go:89] found id: ""
	I1210 06:39:31.969713  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.969719  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:31.969734  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:31.969796  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:31.994263  836363 cri.go:89] found id: ""
	I1210 06:39:31.994288  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.994296  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:31.994305  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:31.994315  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:32.051337  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:32.051358  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:32.068506  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:32.068524  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:32.133010  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:32.124121   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.124862   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.126702   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.127174   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.128721   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:32.124121   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.124862   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.126702   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.127174   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.128721   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:32.133022  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:32.133032  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:32.195411  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:32.195432  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:34.725830  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:34.736154  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:34.736227  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:34.760592  836363 cri.go:89] found id: ""
	I1210 06:39:34.760606  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.760613  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:34.760618  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:34.760679  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:34.789194  836363 cri.go:89] found id: ""
	I1210 06:39:34.789208  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.789215  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:34.789220  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:34.789290  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:34.821768  836363 cri.go:89] found id: ""
	I1210 06:39:34.821783  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.821798  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:34.821804  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:34.821862  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:34.851156  836363 cri.go:89] found id: ""
	I1210 06:39:34.851182  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.851190  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:34.851195  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:34.851262  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:34.881339  836363 cri.go:89] found id: ""
	I1210 06:39:34.881353  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.881361  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:34.881366  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:34.881439  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:34.906857  836363 cri.go:89] found id: ""
	I1210 06:39:34.906871  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.906878  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:34.906884  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:34.906950  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:34.935793  836363 cri.go:89] found id: ""
	I1210 06:39:34.935807  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.935814  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:34.935822  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:34.935832  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:34.993322  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:34.993345  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:35.011292  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:35.011309  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:35.078043  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:35.069050   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:35.070041   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:35.070728   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:35.072495   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:35.073080   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:35.069050   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:35.070041   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:35.070728   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:35.072495   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:35.073080   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:35.078052  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:35.078063  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:35.146644  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:35.146671  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
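
Every "describe nodes" attempt above fails the same way: kubectl dials https://localhost:8441 (the --apiserver-port passed at start) and gets connection refused on [::1]:8441, which is consistent with the empty crictl listings showing no kube-apiserver container at all. A quick way to confirm nothing is listening on that port, independent of kubectl, is a plain TCP dial; this is a sketch, not part of the test suite.

    // Check whether anything is listening on the apiserver port that
    // kubectl keeps failing to reach. A "connection refused" here
    // mirrors the dial error in the stderr blocks above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("nothing listening on 8441:", err)
            return
        }
        conn.Close()
        fmt.Println("port 8441 is open; something is accepting connections")
    }
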
	I1210 06:39:37.678658  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:37.688848  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:37.688925  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:37.713621  836363 cri.go:89] found id: ""
	I1210 06:39:37.713635  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.713642  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:37.713647  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:37.713706  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:37.738638  836363 cri.go:89] found id: ""
	I1210 06:39:37.738651  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.738658  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:37.738663  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:37.738728  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:37.767364  836363 cri.go:89] found id: ""
	I1210 06:39:37.767378  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.767385  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:37.767390  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:37.767446  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:37.804827  836363 cri.go:89] found id: ""
	I1210 06:39:37.804841  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.804848  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:37.804854  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:37.804911  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:37.830424  836363 cri.go:89] found id: ""
	I1210 06:39:37.830438  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.830445  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:37.830449  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:37.830529  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:37.862851  836363 cri.go:89] found id: ""
	I1210 06:39:37.862864  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.862871  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:37.862876  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:37.862933  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:37.887629  836363 cri.go:89] found id: ""
	I1210 06:39:37.887643  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.887650  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:37.887686  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:37.887698  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:37.946033  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:37.946053  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:37.962951  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:37.962969  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:38.030263  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:38.021061   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:38.021797   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:38.022740   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:38.024684   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:38.025056   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:38.021061   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:38.021797   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:38.022740   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:38.024684   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:38.025056   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:38.030274  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:38.030285  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:38.093462  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:38.093482  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:40.622687  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:40.632840  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:40.632902  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:40.657235  836363 cri.go:89] found id: ""
	I1210 06:39:40.657248  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.657255  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:40.657261  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:40.657320  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:40.681835  836363 cri.go:89] found id: ""
	I1210 06:39:40.681849  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.681857  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:40.681862  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:40.681919  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:40.708085  836363 cri.go:89] found id: ""
	I1210 06:39:40.708099  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.708106  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:40.708111  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:40.708172  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:40.734852  836363 cri.go:89] found id: ""
	I1210 06:39:40.734867  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.734874  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:40.734879  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:40.734937  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:40.760765  836363 cri.go:89] found id: ""
	I1210 06:39:40.760779  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.760786  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:40.760791  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:40.760862  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:40.785777  836363 cri.go:89] found id: ""
	I1210 06:39:40.785791  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.785797  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:40.785802  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:40.785862  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:40.812943  836363 cri.go:89] found id: ""
	I1210 06:39:40.812957  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.812963  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:40.812971  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:40.812981  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:40.882713  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:40.874213   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:40.874907   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:40.876393   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:40.876781   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:40.878311   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:40.874213   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:40.874907   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:40.876393   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:40.876781   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:40.878311   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:40.882724  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:40.882746  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:40.946502  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:40.946522  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:40.973695  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:40.973711  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:41.028086  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:41.028105  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
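
The timestamps advance by roughly three seconds per cycle (06:39:29, :31.8, :34.7, :37.7, :40.6, ...), so the waiter appears to re-run `sudo pgrep -xnf kube-apiserver.*minikube.*` on a ~3s interval until a matching process appears or the overall start timeout expires. A sketch of that cadence follows, with illustrative function names rather than minikube's own.

    // Re-check for a kube-apiserver process every ~3s until a deadline,
    // the cadence the log timestamps suggest.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning reports whether pgrep finds a matching process;
    // pgrep exits non-zero when nothing matches, which Run() surfaces
    // as a non-nil error.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                return nil
            }
            time.Sleep(3 * time.Second) // matches the ~3s gap between cycles
        }
        return errors.New("kube-apiserver never appeared before the deadline")
    }

    func main() {
        if err := waitForAPIServer(4 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }
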
	I1210 06:39:43.544743  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:43.554582  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:43.554639  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:43.578394  836363 cri.go:89] found id: ""
	I1210 06:39:43.578408  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.578415  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:43.578421  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:43.578501  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:43.602120  836363 cri.go:89] found id: ""
	I1210 06:39:43.602134  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.602141  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:43.602152  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:43.602211  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:43.626641  836363 cri.go:89] found id: ""
	I1210 06:39:43.626655  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.626662  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:43.626666  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:43.626730  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:43.650792  836363 cri.go:89] found id: ""
	I1210 06:39:43.650805  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.650812  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:43.650817  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:43.650875  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:43.676181  836363 cri.go:89] found id: ""
	I1210 06:39:43.676195  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.676201  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:43.676207  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:43.676264  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:43.700288  836363 cri.go:89] found id: ""
	I1210 06:39:43.700301  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.700308  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:43.700317  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:43.700376  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:43.723140  836363 cri.go:89] found id: ""
	I1210 06:39:43.723154  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.723161  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:43.723169  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:43.723179  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:43.777323  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:43.777344  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:43.793764  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:43.793781  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:43.876520  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:43.868105   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:43.868820   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:43.870334   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:43.870859   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:43.872328   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:43.868105   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:43.868820   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:43.870334   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:43.870859   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:43.872328   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:43.876531  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:43.876546  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:43.937962  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:43.937982  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:46.471232  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:46.481349  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:46.481414  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:46.505604  836363 cri.go:89] found id: ""
	I1210 06:39:46.505618  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.505625  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:46.505631  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:46.505693  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:46.530584  836363 cri.go:89] found id: ""
	I1210 06:39:46.530598  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.530605  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:46.530610  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:46.530667  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:46.555675  836363 cri.go:89] found id: ""
	I1210 06:39:46.555689  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.555696  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:46.555701  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:46.555758  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:46.579225  836363 cri.go:89] found id: ""
	I1210 06:39:46.579240  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.579246  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:46.579252  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:46.579309  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:46.603318  836363 cri.go:89] found id: ""
	I1210 06:39:46.603332  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.603339  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:46.603344  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:46.603400  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:46.628198  836363 cri.go:89] found id: ""
	I1210 06:39:46.628212  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.628219  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:46.628224  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:46.628280  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:46.651425  836363 cri.go:89] found id: ""
	I1210 06:39:46.651439  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.651446  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:46.651454  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:46.651464  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:46.706345  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:46.706364  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:46.722718  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:46.722733  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:46.788441  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:46.780334   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.780989   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.782563   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.783115   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.784714   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:46.780334   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.780989   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.782563   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.783115   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.784714   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:46.788461  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:46.788474  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:46.856250  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:46.856269  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
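
When all probes come back empty, each non-kubectl "Gathering logs for ..." step maps one source to one shell command: journalctl for kubelet and containerd, a filtered dmesg, and a crictl-with-docker-fallback listing for container status. The sketch below replays those four commands verbatim from the log, locally instead of through minikube's ssh_runner.

    // Replay the four shell-based gathering commands exactly as the
    // log shows them. Assumes a host where journalctl/dmesg/crictl
    // are available; output order is fixed by the slice.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        sources := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"containerd", "sudo journalctl -u containerd -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, s := range sources {
            out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
            fmt.Printf("== %s (err=%v) ==\n%s\n", s.name, err, out)
        }
    }
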
	I1210 06:39:49.385907  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:49.395772  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:49.395833  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:49.419273  836363 cri.go:89] found id: ""
	I1210 06:39:49.419286  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.419294  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:49.419299  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:49.419357  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:49.444546  836363 cri.go:89] found id: ""
	I1210 06:39:49.444560  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.444567  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:49.444572  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:49.444634  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:49.469099  836363 cri.go:89] found id: ""
	I1210 06:39:49.469113  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.469120  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:49.469125  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:49.469182  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:49.497447  836363 cri.go:89] found id: ""
	I1210 06:39:49.497461  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.497468  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:49.497473  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:49.497531  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:49.521614  836363 cri.go:89] found id: ""
	I1210 06:39:49.521628  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.521635  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:49.521640  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:49.521700  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:49.546324  836363 cri.go:89] found id: ""
	I1210 06:39:49.546338  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.546345  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:49.546351  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:49.546408  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:49.569503  836363 cri.go:89] found id: ""
	I1210 06:39:49.569516  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.569523  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:49.569531  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:49.569541  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:49.625182  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:49.625201  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:49.641754  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:49.641772  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:49.705447  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:49.697491   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.698134   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.699780   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.700234   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.701724   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:49.697491   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.698134   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.699780   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.700234   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.701724   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:49.705457  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:49.705478  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:49.766615  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:49.766634  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:52.302628  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:52.312769  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:52.312832  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:52.338228  836363 cri.go:89] found id: ""
	I1210 06:39:52.338242  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.338249  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:52.338254  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:52.338315  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:52.363997  836363 cri.go:89] found id: ""
	I1210 06:39:52.364011  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.364018  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:52.364024  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:52.364083  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:52.389867  836363 cri.go:89] found id: ""
	I1210 06:39:52.389881  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.389888  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:52.389894  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:52.389959  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:52.416171  836363 cri.go:89] found id: ""
	I1210 06:39:52.416186  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.416193  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:52.416199  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:52.416262  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:52.440036  836363 cri.go:89] found id: ""
	I1210 06:39:52.440051  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.440058  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:52.440064  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:52.440127  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:52.465173  836363 cri.go:89] found id: ""
	I1210 06:39:52.465188  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.465195  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:52.465200  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:52.465266  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:52.490275  836363 cri.go:89] found id: ""
	I1210 06:39:52.490289  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.490296  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:52.490304  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:52.490316  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:52.507524  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:52.507541  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:52.572947  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:52.565302   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.565716   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.567214   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.567524   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.569003   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:52.565302   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.565716   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.567214   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.567524   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.569003   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:52.572957  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:52.572967  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:52.639898  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:52.639920  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:52.671836  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:52.671853  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
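Each cri.go:54 / logs.go:282 pair above is one container probe: minikube asks crictl for every container (running or exited) whose name matches a control-plane component, and `found id: ""` followed by `0 containers: []` means the match list came back empty. A minimal standalone sketch of that probe, assuming crictl is on the local PATH rather than being reached over SSH as ssh_runner.go does (hypothetical helper, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the probe in the log:
// `sudo crictl ps -a --quiet --name=<name>` prints one container ID per
// line, or nothing at all when no container matches.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, component := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	} {
		ids, err := listContainerIDs(component)
		if err != nil {
			fmt.Printf("probe %s failed: %v\n", component, err)
			continue
		}
		// An empty slice here is what the log reports as `0 containers: []`.
		fmt.Printf("%s: %d containers: %v\n", component, len(ids), ids)
	}
}

In the cycles above, every one of the seven probes returns an empty list: the control plane never produced a single container.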
	I1210 06:39:55.228555  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:55.238632  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:55.238692  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:55.262819  836363 cri.go:89] found id: ""
	I1210 06:39:55.262833  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.262840  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:55.262845  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:55.262903  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:55.287262  836363 cri.go:89] found id: ""
	I1210 06:39:55.287276  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.287282  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:55.287287  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:55.287347  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:55.312064  836363 cri.go:89] found id: ""
	I1210 06:39:55.312077  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.312084  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:55.312089  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:55.312147  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:55.340546  836363 cri.go:89] found id: ""
	I1210 06:39:55.340560  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.340566  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:55.340572  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:55.340638  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:55.369203  836363 cri.go:89] found id: ""
	I1210 06:39:55.369217  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.369224  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:55.369229  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:55.369294  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:55.394186  836363 cri.go:89] found id: ""
	I1210 06:39:55.394200  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.394213  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:55.394218  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:55.394275  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:55.418250  836363 cri.go:89] found id: ""
	I1210 06:39:55.418264  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.418271  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:55.418279  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:55.418293  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:55.449481  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:55.449497  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:55.505651  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:55.505670  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:55.522722  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:55.522739  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:55.595372  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:55.580192   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.580773   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.588978   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.589842   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.591512   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:55.580192   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.580773   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.588978   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.589842   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.591512   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:55.595383  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:55.595396  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
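The recurring `failed describe nodes` block is a direct consequence of those empty probes: the pinned in-node kubectl (/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl) dials the apiserver at localhost:8441 through the in-node kubeconfig, and with no kube-apiserver container running, nothing listens there, so every attempt ends in `connect: connection refused`. A sketch of running that exact command and classifying the failure, assuming local execution (minikube runs it through ssh_runner inside the node; on a plain host the command fails differently):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The exact command from the log; the binary path and kubeconfig only
	// exist inside the minikube node.
	cmd := exec.Command("/bin/bash", "-c",
		"sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes "+
			"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil && strings.Contains(string(out), "connection refused") {
		// The case in the log: nothing listens on localhost:8441, so the
		// describe output is unusable and logs.go records a warning instead.
		fmt.Println("apiserver unreachable; skipping describe nodes")
		return
	}
	fmt.Print(string(out))
}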
	I1210 06:39:58.156956  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:58.167095  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:58.167157  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:58.191075  836363 cri.go:89] found id: ""
	I1210 06:39:58.191089  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.191096  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:58.191101  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:58.191161  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:58.219145  836363 cri.go:89] found id: ""
	I1210 06:39:58.219159  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.219166  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:58.219171  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:58.219230  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:58.243820  836363 cri.go:89] found id: ""
	I1210 06:39:58.243834  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.243841  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:58.243846  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:58.243903  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:58.273220  836363 cri.go:89] found id: ""
	I1210 06:39:58.273234  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.273241  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:58.273246  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:58.273306  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:58.296744  836363 cri.go:89] found id: ""
	I1210 06:39:58.296758  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.296765  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:58.296770  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:58.296826  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:58.321374  836363 cri.go:89] found id: ""
	I1210 06:39:58.321389  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.321395  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:58.321401  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:58.321460  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:58.345587  836363 cri.go:89] found id: ""
	I1210 06:39:58.345601  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.345607  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:58.345615  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:58.345626  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:58.363238  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:58.363255  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:58.430409  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:58.422109   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.422784   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.424524   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.425019   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.426627   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:58.422109   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.422784   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.424524   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.425019   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.426627   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:58.430420  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:58.430439  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:58.492984  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:58.493002  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:58.520139  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:58.520155  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:01.076701  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:01.088176  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:01.088237  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:01.115625  836363 cri.go:89] found id: ""
	I1210 06:40:01.115641  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.115648  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:01.115653  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:01.115713  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:01.142756  836363 cri.go:89] found id: ""
	I1210 06:40:01.142771  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.142779  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:01.142784  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:01.142854  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:01.174021  836363 cri.go:89] found id: ""
	I1210 06:40:01.174036  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.174043  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:01.174048  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:01.174115  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:01.200639  836363 cri.go:89] found id: ""
	I1210 06:40:01.200654  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.200661  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:01.200667  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:01.200729  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:01.225759  836363 cri.go:89] found id: ""
	I1210 06:40:01.225772  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.225779  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:01.225785  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:01.225851  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:01.250911  836363 cri.go:89] found id: ""
	I1210 06:40:01.250926  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.250934  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:01.250940  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:01.251003  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:01.279325  836363 cri.go:89] found id: ""
	I1210 06:40:01.279339  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.279347  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:01.279355  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:01.279366  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:01.335352  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:01.335371  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:01.352578  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:01.352596  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:01.422752  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:01.414308   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.415520   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.417210   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.417554   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.418810   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:01.414308   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.415520   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.417210   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.417554   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.418810   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:01.422763  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:01.422778  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:01.484637  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:01.484658  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
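The `sudo pgrep -xnf kube-apiserver.*minikube.*` line that opens each cycle is the liveness check driving the whole loop: pgrep matches the full command line of the newest kube-apiserver process and exits non-zero when none exists, and the timestamps show one probe-and-gather cycle roughly every three seconds. A sketch of that wait loop, with a hypothetical five-minute deadline (the test's real timeout is configured elsewhere):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the log's check: pgrep matches the full
// command line (-f) of the newest (-n) exactly-matching (-x) process
// and exits non-zero when there is no match.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(5 * time.Minute) // hypothetical; not the test's actual value
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s cadence visible in the timestamps
	}
	fmt.Println("timed out waiting for kube-apiserver")
}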
	I1210 06:40:04.016723  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:04.027134  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:04.027199  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:04.058110  836363 cri.go:89] found id: ""
	I1210 06:40:04.058123  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.058131  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:04.058136  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:04.058194  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:04.085839  836363 cri.go:89] found id: ""
	I1210 06:40:04.085853  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.085859  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:04.085874  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:04.085938  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:04.112846  836363 cri.go:89] found id: ""
	I1210 06:40:04.112870  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.112877  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:04.112884  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:04.112952  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:04.144605  836363 cri.go:89] found id: ""
	I1210 06:40:04.144619  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.144626  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:04.144631  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:04.144698  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:04.170078  836363 cri.go:89] found id: ""
	I1210 06:40:04.170093  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.170111  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:04.170116  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:04.170187  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:04.195493  836363 cri.go:89] found id: ""
	I1210 06:40:04.195560  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.195568  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:04.195573  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:04.195663  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:04.224488  836363 cri.go:89] found id: ""
	I1210 06:40:04.224502  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.224509  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:04.224518  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:04.224528  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:04.280631  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:04.280651  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:04.297645  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:04.297663  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:04.366830  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:04.356738   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.357139   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.358834   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.359561   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.361366   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:04.356738   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.357139   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.358834   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.359561   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.361366   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:04.366842  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:04.366854  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:04.430241  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:04.430260  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:06.963156  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:06.973415  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:06.973480  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:06.997210  836363 cri.go:89] found id: ""
	I1210 06:40:06.997223  836363 logs.go:282] 0 containers: []
	W1210 06:40:06.997230  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:06.997235  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:06.997292  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:07.024360  836363 cri.go:89] found id: ""
	I1210 06:40:07.024374  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.024381  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:07.024386  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:07.024443  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:07.056844  836363 cri.go:89] found id: ""
	I1210 06:40:07.056857  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.056864  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:07.056869  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:07.056926  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:07.095983  836363 cri.go:89] found id: ""
	I1210 06:40:07.095997  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.096004  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:07.096010  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:07.096080  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:07.126932  836363 cri.go:89] found id: ""
	I1210 06:40:07.126947  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.126954  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:07.126958  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:07.127020  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:07.151807  836363 cri.go:89] found id: ""
	I1210 06:40:07.151823  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.151831  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:07.151835  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:07.151895  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:07.175459  836363 cri.go:89] found id: ""
	I1210 06:40:07.175473  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.175480  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:07.175489  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:07.175499  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:07.229963  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:07.229984  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:07.249632  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:07.249654  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:07.314011  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:07.306140   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.306891   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.308497   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.308812   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.310262   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:07.306140   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.306891   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.308497   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.308812   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.310262   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:07.314022  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:07.314034  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:07.376148  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:07.376173  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
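With every probe empty, logs.go falls back to the same five gather commands in each cycle; only their order varies. A condensed sketch of that fan-out, assuming local execution of the same shell commands the log shows (not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The five log sources that repeat in every cycle above.
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		fmt.Printf("== %s ==\n", s.name)
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			// In the log only "describe nodes" fails (status 1); the
			// journal, dmesg, and container-status gathers succeed.
			fmt.Printf("gather failed: %v\n", err)
		}
		fmt.Print(string(out))
	}
}

The dmesg flags keep that gather bounded and readable: --level restricts output to warnings and worse, -P and -L=never drop the pager and color codes, and tail -n 400 caps the volume, mirroring the -n 400 passed to journalctl.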
	I1210 06:40:09.907917  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:09.918267  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:09.918339  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:09.946634  836363 cri.go:89] found id: ""
	I1210 06:40:09.946648  836363 logs.go:282] 0 containers: []
	W1210 06:40:09.946654  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:09.946660  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:09.946729  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:09.971532  836363 cri.go:89] found id: ""
	I1210 06:40:09.971546  836363 logs.go:282] 0 containers: []
	W1210 06:40:09.971553  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:09.971558  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:09.971633  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:09.995748  836363 cri.go:89] found id: ""
	I1210 06:40:09.995762  836363 logs.go:282] 0 containers: []
	W1210 06:40:09.995768  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:09.995773  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:09.995832  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:10.026807  836363 cri.go:89] found id: ""
	I1210 06:40:10.026821  836363 logs.go:282] 0 containers: []
	W1210 06:40:10.026828  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:10.026834  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:10.026902  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:10.060800  836363 cri.go:89] found id: ""
	I1210 06:40:10.060815  836363 logs.go:282] 0 containers: []
	W1210 06:40:10.060822  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:10.060831  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:10.060896  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:10.092175  836363 cri.go:89] found id: ""
	I1210 06:40:10.092190  836363 logs.go:282] 0 containers: []
	W1210 06:40:10.092200  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:10.092205  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:10.092267  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:10.121165  836363 cri.go:89] found id: ""
	I1210 06:40:10.121179  836363 logs.go:282] 0 containers: []
	W1210 06:40:10.121187  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:10.121197  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:10.121208  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:10.137742  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:10.137761  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:10.202959  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:10.194457   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.195204   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.196954   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.197466   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.199061   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:10.194457   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.195204   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.196954   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.197466   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.199061   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:10.202970  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:10.202993  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:10.263838  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:10.263860  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:10.290431  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:10.290450  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:12.845609  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:12.856045  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:12.856108  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:12.881725  836363 cri.go:89] found id: ""
	I1210 06:40:12.881740  836363 logs.go:282] 0 containers: []
	W1210 06:40:12.881756  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:12.881762  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:12.881836  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:12.905554  836363 cri.go:89] found id: ""
	I1210 06:40:12.905568  836363 logs.go:282] 0 containers: []
	W1210 06:40:12.905575  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:12.905580  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:12.905636  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:12.929343  836363 cri.go:89] found id: ""
	I1210 06:40:12.929357  836363 logs.go:282] 0 containers: []
	W1210 06:40:12.929363  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:12.929369  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:12.929427  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:12.958063  836363 cri.go:89] found id: ""
	I1210 06:40:12.958077  836363 logs.go:282] 0 containers: []
	W1210 06:40:12.958083  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:12.958089  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:12.958153  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:12.982226  836363 cri.go:89] found id: ""
	I1210 06:40:12.982240  836363 logs.go:282] 0 containers: []
	W1210 06:40:12.982247  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:12.982252  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:12.982309  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:13.008275  836363 cri.go:89] found id: ""
	I1210 06:40:13.008296  836363 logs.go:282] 0 containers: []
	W1210 06:40:13.008304  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:13.008309  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:13.008376  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:13.032141  836363 cri.go:89] found id: ""
	I1210 06:40:13.032155  836363 logs.go:282] 0 containers: []
	W1210 06:40:13.032161  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:13.032169  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:13.032180  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:13.094529  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:13.094550  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:13.112774  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:13.112794  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:13.177133  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:13.169073   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.169669   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.171355   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.171773   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.173232   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:13.169073   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.169669   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.171355   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.171773   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.173232   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:13.177142  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:13.177157  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:13.237784  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:13.237804  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
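The container-status one-liner deserves a note: `which crictl || echo crictl` substitutes an absolute path when crictl is installed and falls back to the bare name otherwise, and the trailing `|| sudo docker ps -a` covers hosts where crictl is missing or errors out. The same fallback chain written natively (a sketch; the real command runs as a single shell line over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus reproduces the fallback chain: prefer crictl wherever
// PATH finds it, then fall back to docker when crictl is absent or fails.
func containerStatus() (string, error) {
	tool := "crictl"
	if path, err := exec.LookPath("crictl"); err == nil {
		tool = path // the `which crictl` branch
	}
	if out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput(); err == nil {
		return string(out), nil
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("no container runtime listing available:", err)
		return
	}
	fmt.Print(out)
}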
	I1210 06:40:15.773100  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:15.783808  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:15.783870  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:15.808779  836363 cri.go:89] found id: ""
	I1210 06:40:15.808792  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.808799  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:15.808811  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:15.808873  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:15.835122  836363 cri.go:89] found id: ""
	I1210 06:40:15.835136  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.835143  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:15.835147  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:15.835205  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:15.859608  836363 cri.go:89] found id: ""
	I1210 06:40:15.859622  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.859630  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:15.859635  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:15.859698  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:15.884617  836363 cri.go:89] found id: ""
	I1210 06:40:15.884631  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.884637  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:15.884648  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:15.884708  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:15.917645  836363 cri.go:89] found id: ""
	I1210 06:40:15.917659  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.917666  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:15.917671  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:15.917738  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:15.942216  836363 cri.go:89] found id: ""
	I1210 06:40:15.942230  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.942237  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:15.942246  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:15.942306  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:15.969023  836363 cri.go:89] found id: ""
	I1210 06:40:15.969038  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.969045  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:15.969053  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:15.969065  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:16.025303  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:16.025322  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:16.043036  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:16.043055  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:16.124792  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:16.116191   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.116732   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.118445   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.118910   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.120594   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:16.116191   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.116732   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.118445   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.118910   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.120594   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:16.124803  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:16.124829  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:16.187018  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:16.187038  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:18.721268  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:18.732117  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:18.732179  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:18.759703  836363 cri.go:89] found id: ""
	I1210 06:40:18.759717  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.759724  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:18.759729  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:18.759803  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:18.785469  836363 cri.go:89] found id: ""
	I1210 06:40:18.785482  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.785492  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:18.785497  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:18.785556  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:18.809013  836363 cri.go:89] found id: ""
	I1210 06:40:18.809026  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.809033  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:18.809038  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:18.809100  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:18.837693  836363 cri.go:89] found id: ""
	I1210 06:40:18.837707  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.837714  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:18.837719  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:18.837777  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:18.862280  836363 cri.go:89] found id: ""
	I1210 06:40:18.862294  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.862300  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:18.862306  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:18.862366  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:18.887552  836363 cri.go:89] found id: ""
	I1210 06:40:18.887566  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.887573  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:18.887578  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:18.887644  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:18.912972  836363 cri.go:89] found id: ""
	I1210 06:40:18.912987  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.912994  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:18.913002  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:18.913020  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:18.968777  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:18.968818  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:18.987249  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:18.987267  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:19.053510  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:19.044510   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.045135   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.047145   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.047494   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.049061   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:19.044510   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.045135   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.047145   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.047494   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.049061   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:19.053536  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:19.053548  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:19.127699  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:19.127719  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:21.655771  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:21.665930  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:21.665996  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:21.690403  836363 cri.go:89] found id: ""
	I1210 06:40:21.690417  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.690424  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:21.690429  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:21.690526  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:21.716021  836363 cri.go:89] found id: ""
	I1210 06:40:21.716035  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.716042  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:21.716047  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:21.716110  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:21.740524  836363 cri.go:89] found id: ""
	I1210 06:40:21.740538  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.740545  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:21.740551  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:21.740610  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:21.764686  836363 cri.go:89] found id: ""
	I1210 06:40:21.764699  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.764706  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:21.764711  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:21.764768  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:21.789476  836363 cri.go:89] found id: ""
	I1210 06:40:21.789490  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.789497  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:21.789502  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:21.789567  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:21.815957  836363 cri.go:89] found id: ""
	I1210 06:40:21.815973  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.815981  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:21.815986  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:21.816046  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:21.844568  836363 cri.go:89] found id: ""
	I1210 06:40:21.844582  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.844589  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:21.844597  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:21.844607  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:21.900940  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:21.900960  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:21.919059  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:21.919078  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:21.988088  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:21.979038   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.979792   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.981425   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.981947   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.983526   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:21.979038   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.979792   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.981425   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.981947   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.983526   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:21.988098  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:21.988109  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:22.051814  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:22.051834  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:24.585034  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:24.595723  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:24.595789  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:24.624873  836363 cri.go:89] found id: ""
	I1210 06:40:24.624888  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.624895  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:24.624900  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:24.624966  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:24.649543  836363 cri.go:89] found id: ""
	I1210 06:40:24.649557  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.649564  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:24.649570  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:24.649680  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:24.675056  836363 cri.go:89] found id: ""
	I1210 06:40:24.675080  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.675088  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:24.675093  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:24.675154  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:24.700453  836363 cri.go:89] found id: ""
	I1210 06:40:24.700466  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.700474  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:24.700479  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:24.700537  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:24.726867  836363 cri.go:89] found id: ""
	I1210 06:40:24.726881  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.726887  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:24.726893  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:24.726955  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:24.751980  836363 cri.go:89] found id: ""
	I1210 06:40:24.751994  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.752002  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:24.752007  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:24.752068  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:24.782328  836363 cri.go:89] found id: ""
	I1210 06:40:24.782342  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.782349  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:24.782357  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:24.782367  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:24.845411  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:24.845431  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:24.874554  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:24.874571  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:24.930797  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:24.930817  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:24.947891  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:24.947910  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:25.021562  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:25.011415   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.012701   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.013093   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.014987   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.015492   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:25.011415   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.012701   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.013093   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.014987   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.015492   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
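Each retry cycle opens with the same CRI sweep: one crictl query per control-plane component, all returning an empty ID list (found id: ""). A sketch of an equivalent sweep, using only the command and component names that appear in the log (crictl on PATH is assumed, as is the containerd runc root /run/containerd/runc/k8s.io shown in the cri.go lines):

    # Enumerate expected control-plane containers; empty output per name
    # matches the 'No container was found matching ...' warnings above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "no container matching \"$c\""
    done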
	I1210 06:40:27.522215  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:27.533345  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:27.533449  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:27.562516  836363 cri.go:89] found id: ""
	I1210 06:40:27.562529  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.562538  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:27.562543  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:27.562612  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:27.589053  836363 cri.go:89] found id: ""
	I1210 06:40:27.589081  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.589089  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:27.589098  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:27.589171  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:27.614058  836363 cri.go:89] found id: ""
	I1210 06:40:27.614072  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.614079  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:27.614084  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:27.614142  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:27.639274  836363 cri.go:89] found id: ""
	I1210 06:40:27.639288  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.639296  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:27.639310  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:27.639369  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:27.667535  836363 cri.go:89] found id: ""
	I1210 06:40:27.667549  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.667556  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:27.667561  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:27.667630  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:27.691075  836363 cri.go:89] found id: ""
	I1210 06:40:27.691090  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.691097  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:27.691102  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:27.691161  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:27.716129  836363 cri.go:89] found id: ""
	I1210 06:40:27.716142  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.716150  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:27.716157  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:27.716168  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:27.771440  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:27.771460  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:27.788230  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:27.788248  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:27.854509  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:27.846532   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.847111   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.848660   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.849114   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.850650   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:27.846532   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.847111   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.848660   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.849114   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.850650   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:27.854521  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:27.854533  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:27.922148  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:27.922172  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:30.451005  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:30.461920  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:30.461982  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:30.489712  836363 cri.go:89] found id: ""
	I1210 06:40:30.489727  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.489734  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:30.489739  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:30.489800  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:30.513093  836363 cri.go:89] found id: ""
	I1210 06:40:30.513107  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.513114  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:30.513119  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:30.513196  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:30.539836  836363 cri.go:89] found id: ""
	I1210 06:40:30.539850  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.539857  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:30.539862  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:30.539921  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:30.563675  836363 cri.go:89] found id: ""
	I1210 06:40:30.563689  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.563696  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:30.563701  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:30.563768  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:30.587925  836363 cri.go:89] found id: ""
	I1210 06:40:30.587939  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.587946  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:30.587951  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:30.588014  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:30.612003  836363 cri.go:89] found id: ""
	I1210 06:40:30.612018  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.612025  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:30.612031  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:30.612094  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:30.640838  836363 cri.go:89] found id: ""
	I1210 06:40:30.640853  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.640860  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:30.640868  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:30.640879  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:30.696168  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:30.696189  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:30.712444  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:30.712461  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:30.779602  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:30.771019   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.771655   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.773324   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.773861   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.775400   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:30.771019   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.771655   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.773324   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.773861   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.775400   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:30.779612  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:30.779623  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:30.840751  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:30.840772  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:33.372644  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:33.382802  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:33.382862  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:33.407793  836363 cri.go:89] found id: ""
	I1210 06:40:33.407807  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.407815  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:33.407820  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:33.407877  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:33.430878  836363 cri.go:89] found id: ""
	I1210 06:40:33.430892  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.430899  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:33.430904  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:33.430960  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:33.454595  836363 cri.go:89] found id: ""
	I1210 06:40:33.454609  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.454616  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:33.454621  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:33.454678  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:33.479328  836363 cri.go:89] found id: ""
	I1210 06:40:33.479342  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.479349  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:33.479354  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:33.479416  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:33.503717  836363 cri.go:89] found id: ""
	I1210 06:40:33.503731  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.503744  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:33.503750  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:33.503811  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:33.527968  836363 cri.go:89] found id: ""
	I1210 06:40:33.527982  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.527989  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:33.527994  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:33.528076  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:33.552452  836363 cri.go:89] found id: ""
	I1210 06:40:33.552465  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.552472  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:33.552480  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:33.552490  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:33.586111  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:33.586127  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:33.644722  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:33.644742  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:33.663073  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:33.663090  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:33.731033  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:33.723109   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.723789   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.725296   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.725727   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.727228   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:33.723109   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.723789   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.725296   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.725727   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.727228   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
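After the sweep, each cycle gathers the same unit logs, kernel messages, and container status before retrying. The commands below are copied from the Run: lines above and condensed into one pass; the 400-line limits are the values the log shows, not a recommendation:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a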
	I1210 06:40:33.731044  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:33.731060  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:36.294593  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:36.306076  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:36.306134  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:36.334361  836363 cri.go:89] found id: ""
	I1210 06:40:36.334376  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.334383  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:36.334388  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:36.334447  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:36.361890  836363 cri.go:89] found id: ""
	I1210 06:40:36.361904  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.361911  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:36.361916  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:36.361977  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:36.387023  836363 cri.go:89] found id: ""
	I1210 06:40:36.387037  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.387044  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:36.387050  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:36.387109  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:36.411981  836363 cri.go:89] found id: ""
	I1210 06:40:36.411995  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.412011  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:36.412016  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:36.412085  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:36.436105  836363 cri.go:89] found id: ""
	I1210 06:40:36.436119  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.436136  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:36.436142  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:36.436215  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:36.463709  836363 cri.go:89] found id: ""
	I1210 06:40:36.463724  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.463731  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:36.463737  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:36.463795  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:36.492482  836363 cri.go:89] found id: ""
	I1210 06:40:36.492496  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.492503  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:36.492512  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:36.492522  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:36.551191  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:36.551210  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:36.568166  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:36.568183  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:36.635783  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:36.627478   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.627875   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.629429   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.629764   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.631231   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:36.627478   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.627875   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.629429   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.629764   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.631231   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:36.635793  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:36.635806  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:36.706158  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:36.706182  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:39.240421  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:39.250806  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:39.250867  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:39.275350  836363 cri.go:89] found id: ""
	I1210 06:40:39.275363  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.275370  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:39.275375  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:39.275431  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:39.309499  836363 cri.go:89] found id: ""
	I1210 06:40:39.309515  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.309522  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:39.309527  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:39.309605  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:39.335376  836363 cri.go:89] found id: ""
	I1210 06:40:39.335390  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.335397  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:39.335401  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:39.335460  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:39.364171  836363 cri.go:89] found id: ""
	I1210 06:40:39.364185  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.364192  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:39.364197  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:39.364261  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:39.390366  836363 cri.go:89] found id: ""
	I1210 06:40:39.390381  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.390388  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:39.390393  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:39.390456  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:39.418420  836363 cri.go:89] found id: ""
	I1210 06:40:39.418434  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.418441  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:39.418448  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:39.418525  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:39.443654  836363 cri.go:89] found id: ""
	I1210 06:40:39.443667  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.443674  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:39.443683  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:39.443693  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:39.508605  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:39.508627  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:39.541642  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:39.541657  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:39.598637  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:39.598658  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:39.614821  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:39.614837  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:39.681178  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:39.672580   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.673146   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.674858   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.675468   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.677195   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:39.672580   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.673146   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.674858   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.675468   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.677195   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
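The timestamps on the pgrep probes (06:40:16, :18, :21, :24, ... :42) show a retry loop on a roughly three-second cadence that never finds a kube-apiserver process. An illustrative equivalent, assuming the interval from those timestamps rather than from minikube's source, and using the exact pgrep pattern from the log:

    # Illustration only: wait for an apiserver process the way these probes do.
    # In this run the loop never exits, which is why the test eventually times out.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done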
	I1210 06:40:42.181674  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:42.194020  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:42.194088  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:42.223014  836363 cri.go:89] found id: ""
	I1210 06:40:42.223033  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.223041  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:42.223053  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:42.223128  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:42.250171  836363 cri.go:89] found id: ""
	I1210 06:40:42.250186  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.250193  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:42.250199  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:42.250267  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:42.276322  836363 cri.go:89] found id: ""
	I1210 06:40:42.276343  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.276350  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:42.276356  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:42.276417  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:42.312287  836363 cri.go:89] found id: ""
	I1210 06:40:42.312302  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.312309  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:42.312314  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:42.312379  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:42.339930  836363 cri.go:89] found id: ""
	I1210 06:40:42.339944  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.339951  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:42.339956  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:42.340014  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:42.367830  836363 cri.go:89] found id: ""
	I1210 06:40:42.367844  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.367851  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:42.367857  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:42.367919  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:42.392070  836363 cri.go:89] found id: ""
	I1210 06:40:42.392084  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.392091  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:42.392099  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:42.392109  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:42.426049  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:42.426065  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:42.481003  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:42.481025  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:42.497786  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:42.497804  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:42.565103  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:42.556363   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.556746   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.558351   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.558980   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.560866   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:40:42.565114  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:42.565124  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
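
The kubelet and containerd logs above are pulled with journalctl, capped at the last 400 lines per unit. A sketch of that gathering step, assuming a local shell instead of minikube's ssh_runner (hypothetical helper, not the real logs.go):

    // gather_unit_logs.go - fetch recent systemd unit logs, as in
    // "sudo journalctl -u <unit> -n 400" above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func unitLogs(unit string, lines int) (string, error) {
        out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(lines)).CombinedOutput()
        return string(out), err
    }

    func main() {
        for _, u := range []string{"kubelet", "containerd"} {
            logs, err := unitLogs(u, 400)
            if err != nil {
                fmt.Println("failed to read", u, "logs:", err)
                continue
            }
            fmt.Print(logs)
        }
    }
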
	I1210 06:40:45.129131  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:45.143244  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:45.143317  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:45.185169  836363 cri.go:89] found id: ""
	I1210 06:40:45.185203  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.185235  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:45.185259  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:45.185400  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:45.232743  836363 cri.go:89] found id: ""
	I1210 06:40:45.232760  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.232767  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:45.232774  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:45.232857  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:45.264531  836363 cri.go:89] found id: ""
	I1210 06:40:45.264564  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.264573  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:45.264585  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:45.264652  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:45.304876  836363 cri.go:89] found id: ""
	I1210 06:40:45.304891  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.304898  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:45.304912  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:45.304975  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:45.332686  836363 cri.go:89] found id: ""
	I1210 06:40:45.332700  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.332707  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:45.332713  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:45.332772  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:45.361418  836363 cri.go:89] found id: ""
	I1210 06:40:45.361443  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.361454  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:45.361460  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:45.361549  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:45.389935  836363 cri.go:89] found id: ""
	I1210 06:40:45.389949  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.389955  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:45.389963  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:45.389973  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:45.446063  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:45.446081  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:45.463171  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:45.463188  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:45.529007  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:45.520759   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.521319   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.522920   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.523417   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.524918   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:40:45.529017  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:45.529027  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:45.596607  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:45.596629  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
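
Each control-plane component is checked the same way: crictl ps -a --quiet --name=<component> prints one container ID per line, and an empty result is what the log records as found id: "" and "0 containers". A sketch of that listing (hypothetical helper, not minikube's cri.go; assumes crictl is installed on the node):

    // list_cri.go - ask the CRI for containers matching each component name.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        // One ID per line; empty output means no matching containers.
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet"}
        for _, c := range components {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println("crictl failed:", err)
                continue
            }
            fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
        }
    }
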
	I1210 06:40:48.127693  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:48.138167  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:48.138229  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:48.163699  836363 cri.go:89] found id: ""
	I1210 06:40:48.163713  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.163720  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:48.163726  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:48.163788  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:48.187478  836363 cri.go:89] found id: ""
	I1210 06:40:48.187491  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.187498  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:48.187503  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:48.187571  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:48.210551  836363 cri.go:89] found id: ""
	I1210 06:40:48.210565  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.210572  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:48.210577  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:48.210635  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:48.234710  836363 cri.go:89] found id: ""
	I1210 06:40:48.234723  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.234730  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:48.234735  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:48.234792  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:48.257754  836363 cri.go:89] found id: ""
	I1210 06:40:48.257767  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.257774  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:48.257779  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:48.257837  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:48.281482  836363 cri.go:89] found id: ""
	I1210 06:40:48.281497  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.281503  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:48.281508  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:48.281571  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:48.321472  836363 cri.go:89] found id: ""
	I1210 06:40:48.321486  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.321493  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:48.321501  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:48.321519  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:48.353157  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:48.353176  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:48.414214  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:48.414234  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:48.431305  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:48.431324  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:48.504839  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:48.496885   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.497412   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.499192   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.499575   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.501075   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:40:48.504849  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:48.504860  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
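
Each cycle opens with a pgrep probe: -f matches the pattern against the full command line, -x requires an exact match of that line, and -n keeps only the newest hit, so a non-zero exit means no kube-apiserver process is running at all. A sketch of the same probe:

    // apiserver_pid.go - look for a running kube-apiserver the way the
    // log's "sudo pgrep -xnf kube-apiserver.*minikube.*" does.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            // pgrep exits 1 when nothing matches - the state this run is stuck in.
            fmt.Println("no kube-apiserver process found:", err)
            return
        }
        fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
    }
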
	I1210 06:40:51.069620  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:51.080075  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:51.080142  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:51.110642  836363 cri.go:89] found id: ""
	I1210 06:40:51.110656  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.110663  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:51.110668  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:51.110735  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:51.135875  836363 cri.go:89] found id: ""
	I1210 06:40:51.135889  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.135897  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:51.135902  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:51.135969  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:51.160992  836363 cri.go:89] found id: ""
	I1210 06:40:51.161007  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.161014  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:51.161019  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:51.161079  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:51.190942  836363 cri.go:89] found id: ""
	I1210 06:40:51.190957  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.190964  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:51.190969  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:51.191028  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:51.214853  836363 cri.go:89] found id: ""
	I1210 06:40:51.214866  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.214873  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:51.214878  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:51.214934  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:51.238972  836363 cri.go:89] found id: ""
	I1210 06:40:51.238986  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.238993  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:51.238998  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:51.239056  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:51.263101  836363 cri.go:89] found id: ""
	I1210 06:40:51.263115  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.263122  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:51.263130  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:51.263147  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:51.334552  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:51.325962   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.326878   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.328565   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.328869   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.330403   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:40:51.334562  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:51.334574  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:51.405170  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:51.405189  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:51.433244  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:51.433261  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:51.491472  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:51.491494  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
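
The dmesg step filters the kernel ring buffer down to warnings and worse and keeps only the last 400 lines; the pipe to tail is why the log wraps the whole thing in /bin/bash -c instead of invoking dmesg directly. A sketch of the same command run locally:

    // gather_dmesg.go - kernel messages at level warn and above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("dmesg failed:", err)
            return
        }
        fmt.Print(string(out))
    }
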
	I1210 06:40:54.008401  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:54.019572  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:54.019640  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:54.049412  836363 cri.go:89] found id: ""
	I1210 06:40:54.049427  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.049434  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:54.049439  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:54.049505  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:54.074298  836363 cri.go:89] found id: ""
	I1210 06:40:54.074313  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.074319  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:54.074324  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:54.074384  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:54.102940  836363 cri.go:89] found id: ""
	I1210 06:40:54.102954  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.102961  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:54.102966  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:54.103030  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:54.127504  836363 cri.go:89] found id: ""
	I1210 06:40:54.127543  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.127556  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:54.127561  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:54.127619  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:54.156807  836363 cri.go:89] found id: ""
	I1210 06:40:54.156822  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.156829  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:54.156833  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:54.156896  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:54.181320  836363 cri.go:89] found id: ""
	I1210 06:40:54.181335  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.181342  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:54.181348  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:54.181406  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:54.205593  836363 cri.go:89] found id: ""
	I1210 06:40:54.205605  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.205612  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:54.205620  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:54.205631  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:54.222285  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:54.222301  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:54.288392  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:54.279932   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.280608   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.282205   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.282786   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.284468   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:40:54.288402  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:54.288423  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:54.357504  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:54.357523  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:54.391376  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:54.391394  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
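
The "container status" step is a shell fallback chain: the backticks resolve crictl's full path (or leave the bare name when which finds nothing), and "|| sudo docker ps -a" takes over if crictl is missing or errors out. A local sketch of the same chain:

    // container_status.go - list all containers via crictl, falling back to docker.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("both crictl and docker listings failed:", err)
        }
    }
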
	I1210 06:40:56.947968  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:56.957769  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:56.957833  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:56.981684  836363 cri.go:89] found id: ""
	I1210 06:40:56.981698  836363 logs.go:282] 0 containers: []
	W1210 06:40:56.981704  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:56.981709  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:56.981773  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:57.008321  836363 cri.go:89] found id: ""
	I1210 06:40:57.008336  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.008344  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:57.008348  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:57.008409  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:57.033150  836363 cri.go:89] found id: ""
	I1210 06:40:57.033164  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.033171  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:57.033175  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:57.033234  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:57.061083  836363 cri.go:89] found id: ""
	I1210 06:40:57.061096  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.061103  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:57.061108  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:57.061167  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:57.084352  836363 cri.go:89] found id: ""
	I1210 06:40:57.084366  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.084372  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:57.084377  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:57.084432  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:57.108194  836363 cri.go:89] found id: ""
	I1210 06:40:57.108225  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.108239  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:57.108244  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:57.108315  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:57.136912  836363 cri.go:89] found id: ""
	I1210 06:40:57.136926  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.136935  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:57.136942  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:57.136953  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:57.198446  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:57.198510  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:57.225389  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:57.225406  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:57.283570  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:57.283589  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:57.301703  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:57.301727  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:57.380612  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:57.372663   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.373165   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.374676   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.375061   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.376625   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:40:59.880952  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:59.891486  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:59.891569  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:59.915927  836363 cri.go:89] found id: ""
	I1210 06:40:59.915941  836363 logs.go:282] 0 containers: []
	W1210 06:40:59.915947  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:59.915953  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:59.916013  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:59.944178  836363 cri.go:89] found id: ""
	I1210 06:40:59.944192  836363 logs.go:282] 0 containers: []
	W1210 06:40:59.944200  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:59.944205  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:59.944264  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:59.969112  836363 cri.go:89] found id: ""
	I1210 06:40:59.969126  836363 logs.go:282] 0 containers: []
	W1210 06:40:59.969133  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:59.969138  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:59.969201  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:59.994908  836363 cri.go:89] found id: ""
	I1210 06:40:59.994922  836363 logs.go:282] 0 containers: []
	W1210 06:40:59.994929  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:59.994934  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:59.994991  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:00.092005  836363 cri.go:89] found id: ""
	I1210 06:41:00.092022  836363 logs.go:282] 0 containers: []
	W1210 06:41:00.092030  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:00.092036  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:00.092110  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:00.176527  836363 cri.go:89] found id: ""
	I1210 06:41:00.176549  836363 logs.go:282] 0 containers: []
	W1210 06:41:00.176557  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:00.176563  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:00.176628  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:00.227381  836363 cri.go:89] found id: ""
	I1210 06:41:00.227398  836363 logs.go:282] 0 containers: []
	W1210 06:41:00.227406  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:00.227414  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:00.227427  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:00.330232  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:00.330255  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:00.363949  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:00.363967  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:00.445659  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:00.436629   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.437562   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.439318   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.439706   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.441418   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:00.445669  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:00.445681  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:00.509415  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:00.509440  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:03.043380  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:03.053715  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:03.053796  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:03.079434  836363 cri.go:89] found id: ""
	I1210 06:41:03.079449  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.079456  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:03.079462  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:03.079520  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:03.112748  836363 cri.go:89] found id: ""
	I1210 06:41:03.112761  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.112768  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:03.112773  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:03.112831  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:03.137303  836363 cri.go:89] found id: ""
	I1210 06:41:03.137317  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.137324  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:03.137329  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:03.137390  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:03.162303  836363 cri.go:89] found id: ""
	I1210 06:41:03.162317  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.162324  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:03.162329  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:03.162387  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:03.186423  836363 cri.go:89] found id: ""
	I1210 06:41:03.186438  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.186445  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:03.186449  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:03.186542  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:03.215070  836363 cri.go:89] found id: ""
	I1210 06:41:03.215084  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.215091  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:03.215096  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:03.215154  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:03.238820  836363 cri.go:89] found id: ""
	I1210 06:41:03.238834  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.238841  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:03.238850  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:03.238861  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:03.293835  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:03.293853  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:03.312548  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:03.312565  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:03.381504  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:03.373169   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.373896   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.375591   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.376023   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.377455   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:03.381514  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:03.381524  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:03.444806  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:03.444826  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
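
Taken together, this section is one polling loop: the same apiserver check and log-gathering pass repeats roughly every three seconds until a deadline expires. A hypothetical sketch of that outer loop's shape (not minikube's actual wait code; the six-minute deadline is an assumption for illustration):

    // wait_apiserver.go - poll the apiserver port on a fixed interval.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // assumed deadline
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("apiserver is up")
                return
            }
            time.Sleep(3 * time.Second) // matches the ~3s spacing of the log timestamps
        }
        fmt.Println("gave up waiting for the apiserver")
    }
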
	I1210 06:41:05.972428  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:05.982168  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:05.982226  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:06.011191  836363 cri.go:89] found id: ""
	I1210 06:41:06.011206  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.011214  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:06.011220  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:06.011295  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:06.038921  836363 cri.go:89] found id: ""
	I1210 06:41:06.038937  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.038944  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:06.038949  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:06.039011  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:06.063412  836363 cri.go:89] found id: ""
	I1210 06:41:06.063426  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.063433  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:06.063438  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:06.063497  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:06.087777  836363 cri.go:89] found id: ""
	I1210 06:41:06.087800  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.087807  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:06.087812  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:06.087881  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:06.112794  836363 cri.go:89] found id: ""
	I1210 06:41:06.112809  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.112815  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:06.112821  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:06.112877  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:06.137620  836363 cri.go:89] found id: ""
	I1210 06:41:06.137634  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.137641  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:06.137645  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:06.137702  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:06.164245  836363 cri.go:89] found id: ""
	I1210 06:41:06.164259  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.164266  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:06.164274  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:06.164331  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:06.219975  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:06.219994  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:06.236571  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:06.236596  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:06.309920  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:06.294848   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.295676   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.297523   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.298004   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.299656   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:06.294848   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.295676   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.297523   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.298004   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.299656   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:06.309934  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:06.309944  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:06.383624  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:06.383646  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:08.911581  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:08.923631  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:08.923713  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:08.950073  836363 cri.go:89] found id: ""
	I1210 06:41:08.950087  836363 logs.go:282] 0 containers: []
	W1210 06:41:08.950094  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:08.950100  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:08.950157  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:08.976323  836363 cri.go:89] found id: ""
	I1210 06:41:08.976337  836363 logs.go:282] 0 containers: []
	W1210 06:41:08.976345  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:08.976349  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:08.976409  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:09.001975  836363 cri.go:89] found id: ""
	I1210 06:41:09.001991  836363 logs.go:282] 0 containers: []
	W1210 06:41:09.001998  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:09.002004  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:09.002076  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:09.027223  836363 cri.go:89] found id: ""
	I1210 06:41:09.027237  836363 logs.go:282] 0 containers: []
	W1210 06:41:09.027250  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:09.027256  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:09.027314  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:09.051870  836363 cri.go:89] found id: ""
	I1210 06:41:09.051884  836363 logs.go:282] 0 containers: []
	W1210 06:41:09.051890  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:09.051896  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:09.051955  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:09.075643  836363 cri.go:89] found id: ""
	I1210 06:41:09.075658  836363 logs.go:282] 0 containers: []
	W1210 06:41:09.075678  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:09.075684  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:09.075740  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:09.100390  836363 cri.go:89] found id: ""
	I1210 06:41:09.100404  836363 logs.go:282] 0 containers: []
	W1210 06:41:09.100411  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:09.100419  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:09.100430  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:09.164481  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:09.156151   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.156953   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.158536   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.159001   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.160652   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:09.156151   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.156953   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.158536   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.159001   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.160652   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:09.164492  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:09.164502  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:09.228784  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:09.228804  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:09.256846  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:09.256863  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:09.312682  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:09.312702  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
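Note the dial target in the errors above: kubectl reads the server URL https://localhost:8441 from /var/lib/minikube/kubeconfig, and localhost resolves to the IPv6 loopback first, so the refusal is reported against [::1]:8441. It means only that no process is bound to port 8441 on the loopback. A hedged way to confirm this from inside the node (ss is assumed to be available in the node image; the kubectl path and kubeconfig are the ones from the log):

    sudo ss -ltn 'sport = :8441'   # no listener here matches the "connection refused" errors
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes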
	I1210 06:41:11.842135  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:11.852673  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:11.852735  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:11.877129  836363 cri.go:89] found id: ""
	I1210 06:41:11.877144  836363 logs.go:282] 0 containers: []
	W1210 06:41:11.877151  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:11.877156  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:11.877215  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:11.902763  836363 cri.go:89] found id: ""
	I1210 06:41:11.902777  836363 logs.go:282] 0 containers: []
	W1210 06:41:11.902784  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:11.902789  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:11.902863  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:11.927125  836363 cri.go:89] found id: ""
	I1210 06:41:11.927139  836363 logs.go:282] 0 containers: []
	W1210 06:41:11.927146  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:11.927150  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:11.927206  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:11.966123  836363 cri.go:89] found id: ""
	I1210 06:41:11.966137  836363 logs.go:282] 0 containers: []
	W1210 06:41:11.966144  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:11.966149  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:11.966205  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:11.990046  836363 cri.go:89] found id: ""
	I1210 06:41:11.990059  836363 logs.go:282] 0 containers: []
	W1210 06:41:11.990067  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:11.990072  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:11.990132  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:12.015096  836363 cri.go:89] found id: ""
	I1210 06:41:12.015111  836363 logs.go:282] 0 containers: []
	W1210 06:41:12.015118  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:12.015124  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:12.015185  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:12.040883  836363 cri.go:89] found id: ""
	I1210 06:41:12.040897  836363 logs.go:282] 0 containers: []
	W1210 06:41:12.040905  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:12.040912  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:12.040923  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:12.067975  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:12.067991  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:12.124161  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:12.124181  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:12.141074  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:12.141090  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:12.204309  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:12.196503   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.197043   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.198523   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.199003   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.200458   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:12.196503   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.197043   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.198523   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.199003   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.200458   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:12.204325  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:12.204336  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:14.770164  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:14.781008  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:14.781070  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:14.810029  836363 cri.go:89] found id: ""
	I1210 06:41:14.810042  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.810051  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:14.810056  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:14.810115  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:14.834988  836363 cri.go:89] found id: ""
	I1210 06:41:14.835002  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.835009  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:14.835015  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:14.835076  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:14.859273  836363 cri.go:89] found id: ""
	I1210 06:41:14.859287  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.859294  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:14.859299  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:14.859358  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:14.884024  836363 cri.go:89] found id: ""
	I1210 06:41:14.884038  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.884045  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:14.884051  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:14.884111  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:14.907573  836363 cri.go:89] found id: ""
	I1210 06:41:14.907587  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.907596  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:14.907601  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:14.907660  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:14.932198  836363 cri.go:89] found id: ""
	I1210 06:41:14.932212  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.932219  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:14.932225  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:14.932285  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:14.957047  836363 cri.go:89] found id: ""
	I1210 06:41:14.957062  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.957069  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:14.957077  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:14.957087  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:15.015819  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:15.015841  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:15.035356  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:15.035387  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:15.111422  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:15.102537   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.103449   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.105131   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.105642   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.107373   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:15.102537   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.103449   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.105131   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.105642   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.107373   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:15.111434  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:15.111446  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:15.173911  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:15.173930  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:17.707403  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:17.717581  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:17.717645  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:17.741545  836363 cri.go:89] found id: ""
	I1210 06:41:17.741559  836363 logs.go:282] 0 containers: []
	W1210 06:41:17.741566  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:17.741572  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:17.741630  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:17.766133  836363 cri.go:89] found id: ""
	I1210 06:41:17.766147  836363 logs.go:282] 0 containers: []
	W1210 06:41:17.766154  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:17.766159  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:17.766213  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:17.790714  836363 cri.go:89] found id: ""
	I1210 06:41:17.790728  836363 logs.go:282] 0 containers: []
	W1210 06:41:17.790735  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:17.790740  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:17.790795  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:17.814639  836363 cri.go:89] found id: ""
	I1210 06:41:17.814653  836363 logs.go:282] 0 containers: []
	W1210 06:41:17.814660  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:17.814666  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:17.814721  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:17.839269  836363 cri.go:89] found id: ""
	I1210 06:41:17.839283  836363 logs.go:282] 0 containers: []
	W1210 06:41:17.839290  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:17.839295  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:17.839353  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:17.864188  836363 cri.go:89] found id: ""
	I1210 06:41:17.864202  836363 logs.go:282] 0 containers: []
	W1210 06:41:17.864209  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:17.864214  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:17.864273  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:17.889103  836363 cri.go:89] found id: ""
	I1210 06:41:17.889117  836363 logs.go:282] 0 containers: []
	W1210 06:41:17.889124  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:17.889132  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:17.889142  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:17.945534  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:17.945553  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:17.962119  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:17.962136  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:18.031737  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:18.022190   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:18.023153   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:18.024970   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:18.025609   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:18.027479   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:18.022190   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:18.023153   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:18.024970   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:18.025609   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:18.027479   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:18.031747  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:18.031758  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:18.095025  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:18.095045  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:20.626616  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:20.637064  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:20.637135  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:20.661085  836363 cri.go:89] found id: ""
	I1210 06:41:20.661098  836363 logs.go:282] 0 containers: []
	W1210 06:41:20.661105  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:20.661110  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:20.661170  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:20.686407  836363 cri.go:89] found id: ""
	I1210 06:41:20.686420  836363 logs.go:282] 0 containers: []
	W1210 06:41:20.686427  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:20.686432  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:20.686519  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:20.710905  836363 cri.go:89] found id: ""
	I1210 06:41:20.710919  836363 logs.go:282] 0 containers: []
	W1210 06:41:20.710926  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:20.710931  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:20.710989  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:20.735241  836363 cri.go:89] found id: ""
	I1210 06:41:20.735255  836363 logs.go:282] 0 containers: []
	W1210 06:41:20.735262  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:20.735268  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:20.735326  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:20.762996  836363 cri.go:89] found id: ""
	I1210 06:41:20.763010  836363 logs.go:282] 0 containers: []
	W1210 06:41:20.763017  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:20.763022  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:20.763080  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:20.793084  836363 cri.go:89] found id: ""
	I1210 06:41:20.793098  836363 logs.go:282] 0 containers: []
	W1210 06:41:20.793105  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:20.793111  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:20.793167  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:20.821259  836363 cri.go:89] found id: ""
	I1210 06:41:20.821274  836363 logs.go:282] 0 containers: []
	W1210 06:41:20.821281  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:20.821289  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:20.821300  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:20.876655  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:20.876676  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:20.894043  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:20.894060  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:20.967195  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:20.958394   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:20.959075   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:20.960382   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:20.961013   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:20.962652   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:20.958394   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:20.959075   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:20.960382   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:20.961013   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:20.962652   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:20.967206  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:20.967217  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:21.028930  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:21.028949  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
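When the apiserver container never appears at all, the failure typically sits earlier in the chain: the kubelet, which must create the static apiserver pod, or containerd, which must start it. The harness collects exactly those journals in each pass above; the same collection by hand, with an error filter added for readability (the grep is an added illustration, not part of the harness; the journalctl and dmesg invocations mirror the log):

    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail'
    sudo journalctl -u containerd -n 400 --no-pager | grep -iE 'error|fail'
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400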
	I1210 06:41:23.559672  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:23.572318  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:23.572395  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:23.603800  836363 cri.go:89] found id: ""
	I1210 06:41:23.603814  836363 logs.go:282] 0 containers: []
	W1210 06:41:23.603821  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:23.603827  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:23.603900  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:23.634190  836363 cri.go:89] found id: ""
	I1210 06:41:23.634205  836363 logs.go:282] 0 containers: []
	W1210 06:41:23.634212  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:23.634217  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:23.634277  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:23.664876  836363 cri.go:89] found id: ""
	I1210 06:41:23.664890  836363 logs.go:282] 0 containers: []
	W1210 06:41:23.664898  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:23.664904  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:23.664974  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:23.693167  836363 cri.go:89] found id: ""
	I1210 06:41:23.693182  836363 logs.go:282] 0 containers: []
	W1210 06:41:23.693189  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:23.693196  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:23.693264  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:23.719371  836363 cri.go:89] found id: ""
	I1210 06:41:23.719385  836363 logs.go:282] 0 containers: []
	W1210 06:41:23.719393  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:23.719398  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:23.719460  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:23.745307  836363 cri.go:89] found id: ""
	I1210 06:41:23.745321  836363 logs.go:282] 0 containers: []
	W1210 06:41:23.745328  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:23.745334  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:23.745399  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:23.773016  836363 cri.go:89] found id: ""
	I1210 06:41:23.773031  836363 logs.go:282] 0 containers: []
	W1210 06:41:23.773038  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:23.773046  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:23.773056  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:23.829249  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:23.829268  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:23.846743  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:23.846761  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:23.915363  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:23.907095   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:23.907839   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:23.909482   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:23.910101   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:23.911295   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:23.907095   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:23.907839   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:23.909482   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:23.910101   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:23.911295   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:23.915374  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:23.915385  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:23.977818  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:23.977838  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:26.512080  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:26.522967  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:26.523031  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:26.556941  836363 cri.go:89] found id: ""
	I1210 06:41:26.556955  836363 logs.go:282] 0 containers: []
	W1210 06:41:26.556962  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:26.556967  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:26.557028  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:26.583709  836363 cri.go:89] found id: ""
	I1210 06:41:26.583723  836363 logs.go:282] 0 containers: []
	W1210 06:41:26.583731  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:26.583737  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:26.583794  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:26.620398  836363 cri.go:89] found id: ""
	I1210 06:41:26.620411  836363 logs.go:282] 0 containers: []
	W1210 06:41:26.620418  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:26.620424  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:26.620488  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:26.645205  836363 cri.go:89] found id: ""
	I1210 06:41:26.645220  836363 logs.go:282] 0 containers: []
	W1210 06:41:26.645227  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:26.645232  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:26.645295  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:26.672971  836363 cri.go:89] found id: ""
	I1210 06:41:26.672985  836363 logs.go:282] 0 containers: []
	W1210 06:41:26.672992  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:26.672996  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:26.673054  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:26.701966  836363 cri.go:89] found id: ""
	I1210 06:41:26.701980  836363 logs.go:282] 0 containers: []
	W1210 06:41:26.701987  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:26.701993  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:26.702051  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:26.726241  836363 cri.go:89] found id: ""
	I1210 06:41:26.726254  836363 logs.go:282] 0 containers: []
	W1210 06:41:26.726261  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:26.726269  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:26.726280  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:26.782519  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:26.782539  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:26.799105  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:26.799127  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:26.869131  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:26.860787   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:26.861476   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:26.863184   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:26.863795   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:26.865363   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:26.860787   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:26.861476   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:26.863184   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:26.863795   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:26.865363   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:26.869141  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:26.869152  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:26.935169  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:26.935188  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:29.463208  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:29.473355  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:29.473417  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:29.497493  836363 cri.go:89] found id: ""
	I1210 06:41:29.497512  836363 logs.go:282] 0 containers: []
	W1210 06:41:29.497519  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:29.497524  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:29.497584  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:29.525346  836363 cri.go:89] found id: ""
	I1210 06:41:29.525360  836363 logs.go:282] 0 containers: []
	W1210 06:41:29.525366  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:29.525381  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:29.525485  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:29.553583  836363 cri.go:89] found id: ""
	I1210 06:41:29.553596  836363 logs.go:282] 0 containers: []
	W1210 06:41:29.553604  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:29.553609  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:29.553665  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:29.587462  836363 cri.go:89] found id: ""
	I1210 06:41:29.587476  836363 logs.go:282] 0 containers: []
	W1210 06:41:29.587483  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:29.587488  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:29.587559  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:29.625152  836363 cri.go:89] found id: ""
	I1210 06:41:29.625166  836363 logs.go:282] 0 containers: []
	W1210 06:41:29.625173  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:29.625178  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:29.625235  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:29.649760  836363 cri.go:89] found id: ""
	I1210 06:41:29.649773  836363 logs.go:282] 0 containers: []
	W1210 06:41:29.649781  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:29.649786  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:29.649843  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:29.674875  836363 cri.go:89] found id: ""
	I1210 06:41:29.674889  836363 logs.go:282] 0 containers: []
	W1210 06:41:29.674897  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:29.674904  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:29.674916  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:29.691346  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:29.691363  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:29.753565  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:29.745153   15719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:29.745754   15719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:29.747557   15719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:29.748093   15719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:29.749766   15719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:29.753580  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:29.753591  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:29.815732  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:29.815751  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:29.848125  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:29.848141  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
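	Every kubectl call in these retries dies the same way: the dial to https://localhost:8441 is refused at the TCP layer, before TLS or any Kubernetes API handling, because nothing is bound to the apiserver port inside the node. A quick probe sketch of that condition, assuming the same localhost:8441 endpoint the log's kubectl uses:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the endpoint kubectl is failing against in the log.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			// With no apiserver bound to the port, this is the ECONNREFUSED
			// surfacing above as "connect: connection refused".
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on :8441")
	}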
	I1210 06:41:32.408296  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:32.419204  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:32.419279  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:32.445527  836363 cri.go:89] found id: ""
	I1210 06:41:32.445542  836363 logs.go:282] 0 containers: []
	W1210 06:41:32.445548  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:32.445553  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:32.445611  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:32.470075  836363 cri.go:89] found id: ""
	I1210 06:41:32.470088  836363 logs.go:282] 0 containers: []
	W1210 06:41:32.470095  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:32.470108  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:32.470164  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:32.494632  836363 cri.go:89] found id: ""
	I1210 06:41:32.494647  836363 logs.go:282] 0 containers: []
	W1210 06:41:32.494654  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:32.494658  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:32.494732  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:32.522542  836363 cri.go:89] found id: ""
	I1210 06:41:32.522555  836363 logs.go:282] 0 containers: []
	W1210 06:41:32.522568  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:32.522574  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:32.522641  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:32.557483  836363 cri.go:89] found id: ""
	I1210 06:41:32.557498  836363 logs.go:282] 0 containers: []
	W1210 06:41:32.557505  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:32.557511  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:32.557570  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:32.586583  836363 cri.go:89] found id: ""
	I1210 06:41:32.586598  836363 logs.go:282] 0 containers: []
	W1210 06:41:32.586605  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:32.586611  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:32.586673  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:32.614984  836363 cri.go:89] found id: ""
	I1210 06:41:32.614997  836363 logs.go:282] 0 containers: []
	W1210 06:41:32.615004  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:32.615012  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:32.615023  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:32.677103  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:32.669262   15821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:32.669805   15821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:32.671272   15821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:32.671743   15821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:32.673216   15821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:32.677113  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:32.677123  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:32.738003  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:32.738022  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:32.765472  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:32.765488  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:32.822384  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:32.822406  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
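	Each retry gathers the same five log sources, one shell command per source, exactly as the Run: lines show. A sketch of that collection step, under the assumption that each source maps to the single command the log prints for it (minikube's real logs.go also parses and prints the collected output, which is omitted here):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Each source paired with the exact shell command the log runs for it.
	var sources = []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}

	func main() {
		for _, s := range sources {
			fmt.Println("Gathering logs for", s.name, "...")
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			if err != nil {
				fmt.Println("  failed:", err) // e.g. the describe-nodes exit status 1 above
				continue
			}
			fmt.Printf("  collected %d bytes\n", len(out))
		}
	}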
	I1210 06:41:35.339259  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:35.349700  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:35.349758  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:35.375337  836363 cri.go:89] found id: ""
	I1210 06:41:35.375359  836363 logs.go:282] 0 containers: []
	W1210 06:41:35.375366  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:35.375371  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:35.375449  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:35.399613  836363 cri.go:89] found id: ""
	I1210 06:41:35.399627  836363 logs.go:282] 0 containers: []
	W1210 06:41:35.399634  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:35.399639  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:35.399696  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:35.423561  836363 cri.go:89] found id: ""
	I1210 06:41:35.423575  836363 logs.go:282] 0 containers: []
	W1210 06:41:35.423582  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:35.423588  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:35.423650  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:35.448165  836363 cri.go:89] found id: ""
	I1210 06:41:35.448179  836363 logs.go:282] 0 containers: []
	W1210 06:41:35.448186  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:35.448198  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:35.448256  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:35.476970  836363 cri.go:89] found id: ""
	I1210 06:41:35.476984  836363 logs.go:282] 0 containers: []
	W1210 06:41:35.476992  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:35.476997  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:35.477062  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:35.500993  836363 cri.go:89] found id: ""
	I1210 06:41:35.501007  836363 logs.go:282] 0 containers: []
	W1210 06:41:35.501024  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:35.501029  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:35.501087  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:35.530273  836363 cri.go:89] found id: ""
	I1210 06:41:35.530294  836363 logs.go:282] 0 containers: []
	W1210 06:41:35.530301  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:35.530309  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:35.530320  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:35.588229  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:35.588248  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:35.608295  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:35.608311  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:35.673227  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:35.664447   15930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:35.665198   15930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:35.667057   15930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:35.667693   15930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:35.669348   15930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:35.673237  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:35.673248  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:35.735230  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:35.735250  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:38.262657  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:38.273339  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:38.273403  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:38.298561  836363 cri.go:89] found id: ""
	I1210 06:41:38.298576  836363 logs.go:282] 0 containers: []
	W1210 06:41:38.298583  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:38.298588  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:38.298647  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:38.323273  836363 cri.go:89] found id: ""
	I1210 06:41:38.323294  836363 logs.go:282] 0 containers: []
	W1210 06:41:38.323301  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:38.323306  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:38.323369  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:38.348694  836363 cri.go:89] found id: ""
	I1210 06:41:38.348709  836363 logs.go:282] 0 containers: []
	W1210 06:41:38.348716  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:38.348721  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:38.348777  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:38.374030  836363 cri.go:89] found id: ""
	I1210 06:41:38.374044  836363 logs.go:282] 0 containers: []
	W1210 06:41:38.374052  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:38.374057  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:38.374116  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:38.399116  836363 cri.go:89] found id: ""
	I1210 06:41:38.399130  836363 logs.go:282] 0 containers: []
	W1210 06:41:38.399137  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:38.399142  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:38.399205  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:38.431922  836363 cri.go:89] found id: ""
	I1210 06:41:38.431936  836363 logs.go:282] 0 containers: []
	W1210 06:41:38.431943  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:38.431954  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:38.432015  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:38.456101  836363 cri.go:89] found id: ""
	I1210 06:41:38.456115  836363 logs.go:282] 0 containers: []
	W1210 06:41:38.456122  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:38.456130  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:38.456140  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:38.511923  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:38.511943  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:38.528342  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:38.528360  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:38.608737  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:38.599653   16028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:38.600438   16028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:38.601979   16028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:38.602518   16028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:38.604301   16028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:38.608759  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:38.608770  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:38.671052  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:38.671073  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:41.199012  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:41.208683  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:41.208748  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:41.232632  836363 cri.go:89] found id: ""
	I1210 06:41:41.232645  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.232652  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:41.232657  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:41.232718  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:41.255309  836363 cri.go:89] found id: ""
	I1210 06:41:41.255322  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.255329  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:41.255334  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:41.255388  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:41.279539  836363 cri.go:89] found id: ""
	I1210 06:41:41.279553  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.279560  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:41.279565  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:41.279636  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:41.306855  836363 cri.go:89] found id: ""
	I1210 06:41:41.306870  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.306877  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:41.306882  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:41.306943  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:41.331914  836363 cri.go:89] found id: ""
	I1210 06:41:41.331927  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.331933  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:41.331938  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:41.331998  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:41.355926  836363 cri.go:89] found id: ""
	I1210 06:41:41.355940  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.355947  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:41.355952  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:41.356022  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:41.380191  836363 cri.go:89] found id: ""
	I1210 06:41:41.380205  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.380213  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:41.380221  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:41.380237  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:41.396613  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:41.396631  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:41.460969  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:41.452836   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.453418   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.455027   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.455521   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.457097   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:41.460979  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:41.460991  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:41.522046  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:41.522066  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:41.556015  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:41.556032  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:44.133635  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:44.143661  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:44.143725  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:44.170247  836363 cri.go:89] found id: ""
	I1210 06:41:44.170262  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.170269  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:44.170274  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:44.170341  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:44.195020  836363 cri.go:89] found id: ""
	I1210 06:41:44.195034  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.195040  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:44.195045  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:44.195101  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:44.219352  836363 cri.go:89] found id: ""
	I1210 06:41:44.219366  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.219373  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:44.219378  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:44.219435  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:44.247508  836363 cri.go:89] found id: ""
	I1210 06:41:44.247522  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.247529  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:44.247534  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:44.247593  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:44.271983  836363 cri.go:89] found id: ""
	I1210 06:41:44.271997  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.272004  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:44.272009  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:44.272066  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:44.295908  836363 cri.go:89] found id: ""
	I1210 06:41:44.295922  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.295928  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:44.295934  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:44.295993  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:44.324246  836363 cri.go:89] found id: ""
	I1210 06:41:44.324260  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.324266  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:44.324275  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:44.324285  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:44.387028  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:44.387048  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:44.415316  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:44.415332  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:44.471125  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:44.471146  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:44.487999  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:44.488017  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:44.555772  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:44.542191   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.542827   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.545275   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.545612   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.547112   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:47.056814  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:47.066882  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:47.066943  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:47.091827  836363 cri.go:89] found id: ""
	I1210 06:41:47.091841  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.091848  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:47.091853  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:47.091910  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:47.115556  836363 cri.go:89] found id: ""
	I1210 06:41:47.115571  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.115578  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:47.115583  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:47.115640  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:47.140381  836363 cri.go:89] found id: ""
	I1210 06:41:47.140395  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.140402  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:47.140407  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:47.140466  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:47.164584  836363 cri.go:89] found id: ""
	I1210 06:41:47.164599  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.164606  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:47.164611  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:47.164669  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:47.188952  836363 cri.go:89] found id: ""
	I1210 06:41:47.188966  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.188973  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:47.188978  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:47.189036  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:47.215501  836363 cri.go:89] found id: ""
	I1210 06:41:47.215515  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.215522  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:47.215528  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:47.215594  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:47.248270  836363 cri.go:89] found id: ""
	I1210 06:41:47.248284  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.248291  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:47.248301  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:47.248312  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:47.264763  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:47.264780  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:47.328736  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:47.319556   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.320385   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.321992   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.322636   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.324430   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:47.328762  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:47.328773  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:47.391108  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:47.391129  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:47.421573  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:47.421590  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:49.978044  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:49.988396  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:49.988461  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:50.019406  836363 cri.go:89] found id: ""
	I1210 06:41:50.019422  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.019430  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:50.019436  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:50.019525  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:50.046394  836363 cri.go:89] found id: ""
	I1210 06:41:50.046409  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.046416  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:50.046421  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:50.046513  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:50.073199  836363 cri.go:89] found id: ""
	I1210 06:41:50.073213  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.073220  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:50.073225  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:50.073287  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:50.099702  836363 cri.go:89] found id: ""
	I1210 06:41:50.099716  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.099722  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:50.099728  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:50.099787  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:50.128872  836363 cri.go:89] found id: ""
	I1210 06:41:50.128886  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.128893  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:50.128898  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:50.128956  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:50.153319  836363 cri.go:89] found id: ""
	I1210 06:41:50.153333  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.153340  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:50.153346  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:50.153404  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:50.180949  836363 cri.go:89] found id: ""
	I1210 06:41:50.180962  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.180968  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:50.180976  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:50.180986  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:50.242900  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:50.242922  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:50.273618  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:50.273634  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:50.328466  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:50.328485  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:50.344888  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:50.344905  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:50.410799  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:50.402194   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.403126   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.404850   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.405142   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.406780   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:52.911683  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:52.922118  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:52.922186  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:52.947907  836363 cri.go:89] found id: ""
	I1210 06:41:52.947922  836363 logs.go:282] 0 containers: []
	W1210 06:41:52.947930  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:52.947935  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:52.948002  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:52.974796  836363 cri.go:89] found id: ""
	I1210 06:41:52.974812  836363 logs.go:282] 0 containers: []
	W1210 06:41:52.974820  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:52.974826  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:52.974885  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:53.005919  836363 cri.go:89] found id: ""
	I1210 06:41:53.005935  836363 logs.go:282] 0 containers: []
	W1210 06:41:53.005942  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:53.005950  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:53.006027  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:53.033320  836363 cri.go:89] found id: ""
	I1210 06:41:53.033333  836363 logs.go:282] 0 containers: []
	W1210 06:41:53.033340  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:53.033345  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:53.033405  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:53.061819  836363 cri.go:89] found id: ""
	I1210 06:41:53.061834  836363 logs.go:282] 0 containers: []
	W1210 06:41:53.061851  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:53.061857  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:53.061924  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:53.086290  836363 cri.go:89] found id: ""
	I1210 06:41:53.086304  836363 logs.go:282] 0 containers: []
	W1210 06:41:53.086311  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:53.086316  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:53.086374  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:53.111667  836363 cri.go:89] found id: ""
	I1210 06:41:53.111681  836363 logs.go:282] 0 containers: []
	W1210 06:41:53.111697  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:53.111706  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:53.111716  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:53.168392  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:53.168412  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:53.185807  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:53.185823  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:53.254387  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:53.246258   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.246805   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.248540   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.248996   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.250535   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:53.246258   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.246805   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.248540   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.248996   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.250535   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:53.254397  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:53.254408  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:53.319043  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:53.319063  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
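Each retry cycle is the same sweep: minikube asks crictl for any container whose name matches each control-plane component, and every query comes back with an empty ID list. A condensed, hand-runnable form of that sweep (flags copied from the Run lines above):

    # Condensed per-component CRI sweep, as run each cycle above:
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none>}"   # <none> means the component never started
    done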
	I1210 06:41:55.851295  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:55.861334  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:55.861402  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:55.886929  836363 cri.go:89] found id: ""
	I1210 06:41:55.886949  836363 logs.go:282] 0 containers: []
	W1210 06:41:55.886957  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:55.886962  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:55.887020  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:55.915116  836363 cri.go:89] found id: ""
	I1210 06:41:55.915130  836363 logs.go:282] 0 containers: []
	W1210 06:41:55.915138  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:55.915142  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:55.915200  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:55.939013  836363 cri.go:89] found id: ""
	I1210 06:41:55.939033  836363 logs.go:282] 0 containers: []
	W1210 06:41:55.939040  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:55.939045  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:55.939101  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:55.964369  836363 cri.go:89] found id: ""
	I1210 06:41:55.964383  836363 logs.go:282] 0 containers: []
	W1210 06:41:55.964390  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:55.964395  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:55.964455  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:55.989465  836363 cri.go:89] found id: ""
	I1210 06:41:55.989478  836363 logs.go:282] 0 containers: []
	W1210 06:41:55.989485  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:55.989491  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:55.989557  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:56.014203  836363 cri.go:89] found id: ""
	I1210 06:41:56.014218  836363 logs.go:282] 0 containers: []
	W1210 06:41:56.014225  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:56.014230  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:56.014336  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:56.043892  836363 cri.go:89] found id: ""
	I1210 06:41:56.043906  836363 logs.go:282] 0 containers: []
	W1210 06:41:56.043916  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:56.043925  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:56.043936  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:56.112761  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:56.104681   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.105226   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.106816   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.107362   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.108899   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:56.104681   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.105226   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.106816   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.107362   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.108899   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:56.112770  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:56.112781  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:56.174642  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:56.174662  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:56.202947  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:56.202963  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:56.259062  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:56.259082  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
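Between sweeps, each cycle also pulls the same four log sources (kubelet, dmesg, containerd, container status). The exact commands, copied from the Run lines above, can be replayed inside the node when debugging by hand:

    # The four log sources gathered each cycle:
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a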
	I1210 06:41:58.776033  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:58.786675  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:58.786737  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:58.822543  836363 cri.go:89] found id: ""
	I1210 06:41:58.822557  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.822563  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:58.822572  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:58.822634  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:58.848835  836363 cri.go:89] found id: ""
	I1210 06:41:58.848850  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.848857  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:58.848862  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:58.848919  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:58.876530  836363 cri.go:89] found id: ""
	I1210 06:41:58.876544  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.876551  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:58.876556  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:58.876615  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:58.901700  836363 cri.go:89] found id: ""
	I1210 06:41:58.901714  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.901728  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:58.901733  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:58.901791  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:58.928495  836363 cri.go:89] found id: ""
	I1210 06:41:58.928509  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.928515  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:58.928520  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:58.928577  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:58.952415  836363 cri.go:89] found id: ""
	I1210 06:41:58.952428  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.952435  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:58.952440  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:58.952496  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:58.981756  836363 cri.go:89] found id: ""
	I1210 06:41:58.981771  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.981788  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:58.981797  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:58.981809  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:59.049361  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:59.041702   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.042693   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.043691   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.044312   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.045540   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:59.041702   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.042693   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.043691   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.044312   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.045540   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:59.049372  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:59.049382  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:59.111079  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:59.111098  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:59.141459  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:59.141474  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:59.199670  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:59.199691  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:01.716854  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:01.728404  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:01.728475  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:01.756029  836363 cri.go:89] found id: ""
	I1210 06:42:01.756042  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.756049  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:42:01.756054  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:01.756109  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:01.780969  836363 cri.go:89] found id: ""
	I1210 06:42:01.780983  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.780990  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:42:01.780995  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:01.781055  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:01.820198  836363 cri.go:89] found id: ""
	I1210 06:42:01.820212  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.820219  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:42:01.820224  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:01.820284  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:01.848531  836363 cri.go:89] found id: ""
	I1210 06:42:01.848546  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.848553  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:42:01.848558  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:01.848617  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:01.878420  836363 cri.go:89] found id: ""
	I1210 06:42:01.878433  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.878441  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:01.878448  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:01.878534  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:01.905311  836363 cri.go:89] found id: ""
	I1210 06:42:01.905325  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.905344  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:42:01.905350  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:01.905421  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:01.929912  836363 cri.go:89] found id: ""
	I1210 06:42:01.929926  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.929944  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:01.929953  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:01.929963  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:01.985928  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:01.985948  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:02.003638  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:02.003657  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:02.075789  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:42:02.068334   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.068871   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.070020   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.070533   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.071984   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:42:02.068334   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.068871   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.070020   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.070533   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.071984   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:42:02.075800  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:02.075810  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:02.136779  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:42:02.136798  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:04.664122  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:04.675095  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:04.675159  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:04.699777  836363 cri.go:89] found id: ""
	I1210 06:42:04.699800  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.699808  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:42:04.699814  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:04.699911  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:04.724439  836363 cri.go:89] found id: ""
	I1210 06:42:04.724461  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.724468  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:42:04.724473  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:04.724538  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:04.750165  836363 cri.go:89] found id: ""
	I1210 06:42:04.750179  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.750187  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:42:04.750192  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:04.750260  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:04.775655  836363 cri.go:89] found id: ""
	I1210 06:42:04.775669  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.775676  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:42:04.775681  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:04.775740  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:04.805746  836363 cri.go:89] found id: ""
	I1210 06:42:04.805759  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.805776  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:04.805782  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:04.805849  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:04.836239  836363 cri.go:89] found id: ""
	I1210 06:42:04.836261  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.836269  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:42:04.836275  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:04.836344  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:04.862854  836363 cri.go:89] found id: ""
	I1210 06:42:04.862868  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.862875  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:04.862883  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:04.862893  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:04.922415  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:04.922435  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:04.939187  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:04.939203  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:05.006750  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:42:04.996413   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.996887   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.998439   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.998946   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:05.000828   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:42:04.996413   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.996887   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.998439   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.998946   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:05.000828   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:42:05.006762  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:05.006773  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:05.070511  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:42:05.070533  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:07.606355  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:07.617096  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:07.617156  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:07.642031  836363 cri.go:89] found id: ""
	I1210 06:42:07.642047  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.642054  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:42:07.642060  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:07.642117  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:07.670075  836363 cri.go:89] found id: ""
	I1210 06:42:07.670089  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.670107  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:42:07.670114  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:07.670174  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:07.695503  836363 cri.go:89] found id: ""
	I1210 06:42:07.695517  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.695534  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:42:07.695539  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:07.695613  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:07.719792  836363 cri.go:89] found id: ""
	I1210 06:42:07.719805  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.719813  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:42:07.719818  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:07.719875  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:07.742885  836363 cri.go:89] found id: ""
	I1210 06:42:07.742899  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.742906  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:07.742911  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:07.742972  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:07.766658  836363 cri.go:89] found id: ""
	I1210 06:42:07.766672  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.766679  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:42:07.766684  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:07.766742  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:07.790890  836363 cri.go:89] found id: ""
	I1210 06:42:07.790917  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.790924  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:07.790932  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:42:07.790943  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:07.832030  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:07.832053  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:07.897794  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:07.897815  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:07.914747  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:07.914765  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:07.985400  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:42:07.977663   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.978174   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.979730   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.980136   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.981623   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:42:07.977663   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.978174   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.979730   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.980136   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.981623   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:42:07.985411  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:07.985422  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:10.549627  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:10.559818  836363 kubeadm.go:602] duration metric: took 4m3.540459063s to restartPrimaryControlPlane
	W1210 06:42:10.559885  836363 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 06:42:10.559961  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
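Just over four minutes of empty sweeps ("took 4m3.540459063s") is the cutoff: minikube gives up restarting the existing control plane and falls back to wiping it with kubeadm reset and re-running init. After the reset, none of the kubeconfig stubs under /etc/kubernetes survive, which is why every ls and grep below exits with status 2. A one-line check of that state (sketch):

    # After `kubeadm reset --force`, the kubeconfig stubs are gone:
    sudo ls -la /etc/kubernetes/*.conf 2>&1 || true
    # Expect "No such file or directory" for admin/kubelet/
    # controller-manager/scheduler.conf, exactly as the log shows.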
	I1210 06:42:10.971123  836363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:42:10.985022  836363 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:42:10.992941  836363 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:42:10.992994  836363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:42:11.001748  836363 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:42:11.001760  836363 kubeadm.go:158] found existing configuration files:
	
	I1210 06:42:11.001824  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:42:11.011668  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:42:11.011736  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:42:11.019850  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:42:11.027722  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:42:11.027783  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:42:11.035605  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:42:11.043216  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:42:11.043273  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:42:11.050854  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:42:11.058765  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:42:11.058844  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
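The stale-config sweep above has a fixed shape: for each of the four kubeconfigs, grep for the expected control-plane endpoint and remove the file when the grep fails (status 2 here just means the file is already absent). Condensed (endpoint and paths copied from the log):

    # Condensed stale-kubeconfig sweep, one pass per file above:
    ep="https://control-plane.minikube.internal:8441"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ep" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done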
	I1210 06:42:11.066934  836363 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:42:11.105523  836363 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 06:42:11.105575  836363 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:42:11.188151  836363 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:42:11.188218  836363 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:42:11.188255  836363 kubeadm.go:319] OS: Linux
	I1210 06:42:11.188304  836363 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:42:11.188354  836363 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:42:11.188398  836363 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:42:11.188448  836363 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:42:11.188493  836363 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:42:11.188543  836363 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:42:11.188590  836363 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:42:11.188634  836363 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:42:11.188683  836363 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:42:11.250124  836363 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:42:11.250230  836363 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:42:11.250322  836363 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:42:11.255308  836363 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:42:11.258775  836363 out.go:252]   - Generating certificates and keys ...
	I1210 06:42:11.258873  836363 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:42:11.258950  836363 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:42:11.259045  836363 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:42:11.259113  836363 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:42:11.259184  836363 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:42:11.259237  836363 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:42:11.259299  836363 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:42:11.259360  836363 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:42:11.259435  836363 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:42:11.259512  836363 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:42:11.259731  836363 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:42:11.259789  836363 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:42:12.423232  836363 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:42:12.577934  836363 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:42:12.783953  836363 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:42:13.093269  836363 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:42:13.330460  836363 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:42:13.331164  836363 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:42:13.333749  836363 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:42:13.336840  836363 out.go:252]   - Booting up control plane ...
	I1210 06:42:13.336937  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:42:13.337013  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:42:13.337083  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:42:13.358981  836363 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:42:13.359103  836363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:42:13.368350  836363 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:42:13.369623  836363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:42:13.370235  836363 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:42:13.505873  836363 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:42:13.506077  836363 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:46:13.506731  836363 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00070392s
	I1210 06:46:13.506763  836363 kubeadm.go:319] 
	I1210 06:46:13.506850  836363 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:46:13.506894  836363 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:46:13.506999  836363 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:46:13.507005  836363 kubeadm.go:319] 
	I1210 06:46:13.507125  836363 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:46:13.507158  836363 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:46:13.507196  836363 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:46:13.507200  836363 kubeadm.go:319] 
	I1210 06:46:13.511687  836363 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:46:13.512136  836363 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:46:13.512245  836363 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:46:13.512495  836363 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:46:13.512501  836363 kubeadm.go:319] 
	I1210 06:46:13.512574  836363 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
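The init run gets through certs, kubeconfigs, and static Pod manifests, then stalls at wait-control-plane: kubeadm polls the kubelet's local healthz endpoint for up to 4m0s and never gets an answer. The probe and follow-ups below are the ones kubeadm itself names, runnable inside the node:

    # kubeadm's stated equivalent of the failing health check:
    curl -sSL http://127.0.0.1:10248/healthz
    # Its suggested next steps when the check times out:
    systemctl status kubelet
    journalctl -xeu kubelet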
	W1210 06:46:13.512709  836363 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00070392s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
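The stderr above already names the likely root cause: kubelet v1.35 deprecates cgroup v1, and the system verification shows this runner exposing the v1 controllers. A minimal triage sketch from inside the node (for example via "out/minikube-linux-arm64 ssh -p functional-534748"), using the two commands kubeadm itself recommends plus a standard check of which cgroup filesystem is mounted; the stat check is an addition for illustration, not something this log ran:

	# the two commands kubeadm recommends above
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 50
	# prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on cgroup v1
	stat -fc %T /sys/fs/cgroup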
	I1210 06:46:13.512792  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 06:46:13.924248  836363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:46:13.937517  836363 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:46:13.937579  836363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:46:13.945462  836363 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:46:13.945471  836363 kubeadm.go:158] found existing configuration files:
	
	I1210 06:46:13.945523  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:46:13.953499  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:46:13.953555  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:46:13.961232  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:46:13.969190  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:46:13.969248  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:46:13.976966  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:46:13.984824  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:46:13.984878  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:46:13.992414  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:46:14.002049  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:46:14.002141  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:46:14.011865  836363 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:46:14.052323  836363 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 06:46:14.052372  836363 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:46:14.126225  836363 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:46:14.126291  836363 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:46:14.126325  836363 kubeadm.go:319] OS: Linux
	I1210 06:46:14.126369  836363 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:46:14.126415  836363 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:46:14.126482  836363 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:46:14.126530  836363 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:46:14.126577  836363 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:46:14.126624  836363 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:46:14.126668  836363 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:46:14.126716  836363 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:46:14.126761  836363 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:46:14.195770  836363 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:46:14.195873  836363 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:46:14.195962  836363 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:46:14.202979  836363 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:46:14.208298  836363 out.go:252]   - Generating certificates and keys ...
	I1210 06:46:14.208399  836363 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:46:14.208478  836363 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:46:14.208559  836363 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:46:14.208622  836363 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:46:14.208696  836363 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:46:14.208754  836363 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:46:14.208821  836363 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:46:14.208886  836363 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:46:14.208964  836363 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:46:14.209040  836363 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:46:14.209080  836363 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:46:14.209138  836363 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:46:14.596166  836363 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:46:14.891862  836363 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:46:14.944957  836363 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:46:15.236183  836363 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:46:15.354206  836363 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:46:15.354795  836363 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:46:15.357335  836363 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:46:15.360719  836363 out.go:252]   - Booting up control plane ...
	I1210 06:46:15.360814  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:46:15.360889  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:46:15.360954  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:46:15.381031  836363 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:46:15.381140  836363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:46:15.389841  836363 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:46:15.391023  836363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:46:15.391179  836363 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:46:15.526794  836363 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:46:15.526907  836363 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:50:15.527073  836363 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000371584s
	I1210 06:50:15.527097  836363 kubeadm.go:319] 
	I1210 06:50:15.527182  836363 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:50:15.527235  836363 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:50:15.527340  836363 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:50:15.527347  836363 kubeadm.go:319] 
	I1210 06:50:15.527451  836363 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:50:15.527482  836363 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:50:15.527512  836363 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:50:15.527515  836363 kubeadm.go:319] 
	I1210 06:50:15.531196  836363 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:50:15.531609  836363 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:50:15.531716  836363 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:50:15.531977  836363 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 06:50:15.531981  836363 kubeadm.go:319] 
	I1210 06:50:15.532049  836363 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 06:50:15.532106  836363 kubeadm.go:403] duration metric: took 12m8.555678628s to StartCluster
	I1210 06:50:15.532150  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:50:15.532210  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:50:15.570548  836363 cri.go:89] found id: ""
	I1210 06:50:15.570562  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.570569  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:50:15.570575  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:50:15.570641  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:50:15.600057  836363 cri.go:89] found id: ""
	I1210 06:50:15.600071  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.600078  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:50:15.600083  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:50:15.600143  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:50:15.630207  836363 cri.go:89] found id: ""
	I1210 06:50:15.630221  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.630228  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:50:15.630232  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:50:15.630288  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:50:15.654767  836363 cri.go:89] found id: ""
	I1210 06:50:15.654781  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.654788  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:50:15.654793  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:50:15.654853  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:50:15.678797  836363 cri.go:89] found id: ""
	I1210 06:50:15.678823  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.678830  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:50:15.678835  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:50:15.678895  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:50:15.707130  836363 cri.go:89] found id: ""
	I1210 06:50:15.707144  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.707151  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:50:15.707157  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:50:15.707215  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:50:15.732682  836363 cri.go:89] found id: ""
	I1210 06:50:15.732696  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.732703  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:50:15.732711  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:50:15.732725  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:50:15.749626  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:50:15.749643  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:50:15.820658  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:50:15.811023   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.812026   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.813622   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.814217   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.815852   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:50:15.811023   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.812026   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.813622   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.814217   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.815852   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:50:15.820670  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:50:15.820682  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:50:15.883000  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:50:15.883021  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:50:15.913106  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:50:15.913122  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 06:50:15.972159  836363 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000371584s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:50:15.972201  836363 out.go:285] * 
	W1210 06:50:15.972316  836363 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000371584s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:50:15.972359  836363 out.go:285] * 
	W1210 06:50:15.974510  836363 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:50:15.979994  836363 out.go:203] 
	W1210 06:50:15.983642  836363 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000371584s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:50:15.983686  836363 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:50:15.983706  836363 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:50:15.987432  836363 out.go:203] 
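minikube's parting suggestion above is to force the kubelet cgroup driver to systemd. A hedged retry sketch assembled only from values already present in this run (binary and profile names from the commands in this report, driver, runtime, and Kubernetes version as used throughout); whether this flag alone clears the v1.35 cgroup v1 validation on this host is not confirmed by this report:

	# optional: start from a clean profile first
	out/minikube-linux-arm64 delete -p functional-534748
	out/minikube-linux-arm64 start -p functional-534748 \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd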
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445107196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445121990Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445162984Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445179287Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445188756Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445200998Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445209959Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445223464Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445238939Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445267518Z" level=info msg="Connect containerd service"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445551476Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.446055950Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.466617657Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.466678671Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.466705092Z" level=info msg="Start subscribing containerd event"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.466755874Z" level=info msg="Start recovering state"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511858771Z" level=info msg="Start event monitor"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511903539Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511912844Z" level=info msg="Start streaming server"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511923740Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511932676Z" level=info msg="runtime interface starting up..."
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511939502Z" level=info msg="starting plugins..."
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511951014Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 06:38:05 functional-534748 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.523710063Z" level=info msg="containerd successfully booted in 0.098844s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:50:19.375901   21131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:19.376685   21131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:19.378240   21131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:19.378733   21131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:19.380244   21131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 06:50:19 up  5:32,  0 user,  load average: 0.61, 0.24, 0.46
	Linux functional-534748 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:50:16 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:50:17 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 10 06:50:17 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:17 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:17 functional-534748 kubelet[20967]: E1210 06:50:17.103850   20967 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:50:17 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:50:17 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:50:17 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 10 06:50:17 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:17 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:17 functional-534748 kubelet[21007]: E1210 06:50:17.875127   21007 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:50:17 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:50:17 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:50:18 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 10 06:50:18 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:18 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:18 functional-534748 kubelet[21043]: E1210 06:50:18.620096   21043 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:50:18 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:50:18 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:50:19 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 10 06:50:19 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:19 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:19 functional-534748 kubelet[21124]: E1210 06:50:19.367022   21124 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:50:19 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:50:19 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
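The kubelet journal at the end of the log above shows the concrete blocker: kubelet v1.35 refuses to start on a cgroup v1 host unless that validation is explicitly relaxed, and the earlier kubeadm warning names the knob ('FailCgroupV1'). A sketch of relaxing it, assuming the KubeletConfiguration field is spelled failCgroupV1, that /var/lib/kubelet/config.yaml (the file the [kubelet-start] phase writes above) is the effective config on this node, and that the key is not already present in the file; all of these should be verified against the v1.35 KubeletConfiguration reference before use:

	# hypothetical: append the option named in the kubeadm warning, then restart
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet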
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748: exit status 2 (361.779641ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-534748" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.27s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-534748 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-534748 apply -f testdata/invalidsvc.yaml: exit status 1 (60.837099ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-534748 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-534748 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-534748 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-534748 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-534748 --alsologtostderr -v=1] stderr:
I1210 06:52:46.063225  853827 out.go:360] Setting OutFile to fd 1 ...
I1210 06:52:46.063408  853827 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:52:46.063420  853827 out.go:374] Setting ErrFile to fd 2...
I1210 06:52:46.063426  853827 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:52:46.063689  853827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
I1210 06:52:46.063943  853827 mustload.go:66] Loading cluster: functional-534748
I1210 06:52:46.064375  853827 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1210 06:52:46.064860  853827 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
I1210 06:52:46.082964  853827 host.go:66] Checking if "functional-534748" exists ...
I1210 06:52:46.083341  853827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1210 06:52:46.137522  853827 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:52:46.127364783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1210 06:52:46.137637  853827 api_server.go:166] Checking apiserver status ...
I1210 06:52:46.137705  853827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1210 06:52:46.137751  853827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
I1210 06:52:46.156725  853827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
W1210 06:52:46.255914  853827 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1210 06:52:46.259089  853827 out.go:179] * The control-plane node functional-534748 apiserver is not running: (state=Stopped)
I1210 06:52:46.262011  853827 out.go:179]   To start a cluster, run: "minikube start -p functional-534748"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-534748
helpers_test.go:244: (dbg) docker inspect functional-534748:

-- stdout --
	[
	    {
	        "Id": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	        "Created": "2025-12-10T06:23:23.608302198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 825111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:23:23.673039154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hostname",
	        "HostsPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hosts",
	        "LogPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db-json.log",
	        "Name": "/functional-534748",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-534748:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-534748",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	                "LowerDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-534748",
	                "Source": "/var/lib/docker/volumes/functional-534748/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-534748",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-534748",
	                "name.minikube.sigs.k8s.io": "functional-534748",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3110adc4cbcb3c834e173481804e433b3e0d0c32a77d5d828fb821433a717f76",
	            "SandboxKey": "/var/run/docker/netns/3110adc4cbcb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-534748": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:67:c6:ed:32:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5c6adbf364f07065a2170dca5c03ba4b1a3df833c656e117d5727d09797ca30e",
	                    "EndpointID": "ba255b7075cbb83aa425533c0034d7778d609f47fa6442925eb6c8393edb0fa6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-534748",
	                        "afb46bc1850e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
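
The NetworkSettings.Ports map above is the source of the SSH endpoint the test harness dials (22/tcp -> 127.0.0.1:33530). A short sketch that resolves it with the same Go template seen at cli_runner.go:164; hostPort is an illustrative name, and a docker CLI on PATH is assumed:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Ask dockerd for the host port bound to the container's 22/tcp.
	func hostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostPort("functional-534748")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", port) // "33530" per the inspect output above
	}
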
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748: exit status 2 (300.407094ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-534748 service hello-node --url --format={{.IP}}                                                                                         │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ service   │ functional-534748 service hello-node --url                                                                                                          │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ mount     │ -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3047742465/001:/mount-9p --alsologtostderr -v=1              │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ ssh       │ functional-534748 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh       │ functional-534748 ssh -- ls -la /mount-9p                                                                                                           │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh       │ functional-534748 ssh cat /mount-9p/test-1765349557041216384                                                                                        │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh       │ functional-534748 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ ssh       │ functional-534748 ssh sudo umount -f /mount-9p                                                                                                      │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh       │ functional-534748 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ mount     │ -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4177155203/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ ssh       │ functional-534748 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh       │ functional-534748 ssh -- ls -la /mount-9p                                                                                                           │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh       │ functional-534748 ssh sudo umount -f /mount-9p                                                                                                      │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ mount     │ -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3563118548/001:/mount1 --alsologtostderr -v=1                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ ssh       │ functional-534748 ssh findmnt -T /mount1                                                                                                            │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ mount     │ -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3563118548/001:/mount2 --alsologtostderr -v=1                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ mount     │ -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3563118548/001:/mount3 --alsologtostderr -v=1                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ ssh       │ functional-534748 ssh findmnt -T /mount1                                                                                                            │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh       │ functional-534748 ssh findmnt -T /mount2                                                                                                            │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh       │ functional-534748 ssh findmnt -T /mount3                                                                                                            │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ mount     │ -p functional-534748 --kill=true                                                                                                                    │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ start     │ -p functional-534748 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ start     │ -p functional-534748 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ start     │ -p functional-534748 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-534748 --alsologtostderr -v=1                                                                                      │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:52:45
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:52:45.807653  853756 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:52:45.807764  853756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:52:45.807774  853756 out.go:374] Setting ErrFile to fd 2...
	I1210 06:52:45.807779  853756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:52:45.808034  853756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 06:52:45.808382  853756 out.go:368] Setting JSON to false
	I1210 06:52:45.809206  853756 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":20090,"bootTime":1765329476,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 06:52:45.809271  853756 start.go:143] virtualization:  
	I1210 06:52:45.812558  853756 out.go:179] * [functional-534748] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:52:45.815542  853756 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:52:45.815687  853756 notify.go:221] Checking for updates...
	I1210 06:52:45.821390  853756 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:52:45.824195  853756 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:52:45.826987  853756 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 06:52:45.829774  853756 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:52:45.832618  853756 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:52:45.835931  853756 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:52:45.836500  853756 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:52:45.866568  853756 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:52:45.866757  853756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:52:45.930067  853756 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:52:45.920459297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:52:45.930177  853756 docker.go:319] overlay module found
	I1210 06:52:45.933251  853756 out.go:179] * Using the docker driver based on existing profile
	I1210 06:52:45.936100  853756 start.go:309] selected driver: docker
	I1210 06:52:45.936124  853756 start.go:927] validating driver "docker" against &{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:52:45.936235  853756 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:52:45.936344  853756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:52:46.003505  853756 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:52:45.991043175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:52:46.003976  853756 cni.go:84] Creating CNI manager for ""
	I1210 06:52:46.004045  853756 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:52:46.004093  853756 start.go:353] cluster config:
	{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:52:46.007190  853756 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445107196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445121990Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445162984Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445179287Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445188756Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445200998Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445209959Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445223464Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445238939Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445267518Z" level=info msg="Connect containerd service"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445551476Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.446055950Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.466617657Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.466678671Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.466705092Z" level=info msg="Start subscribing containerd event"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.466755874Z" level=info msg="Start recovering state"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511858771Z" level=info msg="Start event monitor"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511903539Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511912844Z" level=info msg="Start streaming server"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511923740Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511932676Z" level=info msg="runtime interface starting up..."
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511939502Z" level=info msg="starting plugins..."
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511951014Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 06:38:05 functional-534748 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.523710063Z" level=info msg="containerd successfully booted in 0.098844s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:52:47.274416   23378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:52:47.275202   23378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:52:47.276781   23378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:52:47.277080   23378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:52:47.278733   23378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 06:52:47 up  5:34,  0 user,  load average: 1.01, 0.42, 0.48
	Linux functional-534748 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:52:44 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:52:44 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 519.
	Dec 10 06:52:44 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:44 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:44 functional-534748 kubelet[23237]: E1210 06:52:44.880834   23237 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:52:44 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:52:44 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:52:45 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 520.
	Dec 10 06:52:45 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:45 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:45 functional-534748 kubelet[23258]: E1210 06:52:45.609132   23258 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:52:45 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:52:45 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:52:46 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 521.
	Dec 10 06:52:46 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:46 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:46 functional-534748 kubelet[23273]: E1210 06:52:46.342807   23273 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:52:46 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:52:46 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:52:47 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 522.
	Dec 10 06:52:47 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:47 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:47 functional-534748 kubelet[23332]: E1210 06:52:47.092495   23332 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:52:47 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:52:47 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
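
The repeated kubectl failures in the log ("dial tcp [::1]:8441: connect: connection refused") indicate that nothing is listening on the apiserver port at all, rather than a TLS or authentication problem. A quick probe that makes that distinction explicit (a hypothetical check; the port comes from this profile's --apiserver-port=8441):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// "connection refused" here means the apiserver process is simply absent.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port open; any remaining failure would be TLS/auth")
	}
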
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748: exit status 2 (314.427512ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-534748" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.67s)
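
The kubelet journal above shows the root cause for this group of failures: kubelet v1.35.0-beta.0 fails configuration validation on a host still running cgroup v1, and it has crash-looped past restart 520. A sketch of how a host's cgroup mode can be detected (assumed helper, not minikube code; the marker file below exists only on the unified v2 hierarchy):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// /sys/fs/cgroup/cgroup.controllers is only present on cgroup v2.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified) - accepted by kubelet v1.35+")
		} else {
			fmt.Println("cgroup v1 (legacy) - kubelet v1.35.0-beta.0 refuses to start")
		}
	}
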

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd


=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-534748 status: exit status 2 (333.730086ms)

-- stdout --
	functional-534748
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-534748 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-534748 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (296.647763ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-534748 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-534748 status -o json: exit status 2 (306.470272ms)

-- stdout --
	{"Name":"functional-534748","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-534748 status -o json" : exit status 2
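
The -o json form above emits one JSON object for the node, so it is the easiest output to consume programmatically. A small sketch that decodes it, with the field names taken directly from the output shown:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Field names match the keys in the status JSON above.
	type Status struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		raw := `{"Name":"functional-534748","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
		var s Status
		if err := json.Unmarshal([]byte(raw), &s); err != nil {
			panic(err)
		}
		fmt.Printf("%s: host=%s apiserver=%s\n", s.Name, s.Host, s.APIServer)
	}
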
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-534748
helpers_test.go:244: (dbg) docker inspect functional-534748:

-- stdout --
	[
	    {
	        "Id": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	        "Created": "2025-12-10T06:23:23.608302198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 825111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:23:23.673039154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hostname",
	        "HostsPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hosts",
	        "LogPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db-json.log",
	        "Name": "/functional-534748",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-534748:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-534748",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	                "LowerDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-534748",
	                "Source": "/var/lib/docker/volumes/functional-534748/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-534748",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-534748",
	                "name.minikube.sigs.k8s.io": "functional-534748",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3110adc4cbcb3c834e173481804e433b3e0d0c32a77d5d828fb821433a717f76",
	            "SandboxKey": "/var/run/docker/netns/3110adc4cbcb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-534748": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:67:c6:ed:32:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5c6adbf364f07065a2170dca5c03ba4b1a3df833c656e117d5727d09797ca30e",
	                    "EndpointID": "ba255b7075cbb83aa425533c0034d7778d609f47fa6442925eb6c8393edb0fa6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-534748",
	                        "afb46bc1850e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
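The port map in the inspect output above is what the harness resolves at runtime; here is a minimal Go sketch of the same lookup, using the inspect template that appears later in these logs (the hostSSHPort helper is hypothetical, for illustration only):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort asks Docker which host port was published for the container's
// 22/tcp, using the same Go template that appears in the cli_runner lines
// below. Hypothetical helper, not minikube's actual API.
func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("functional-534748")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh published on 127.0.0.1:" + port) // 33530 in the output above
}
```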
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748: exit status 2 (315.265266ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
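The harness tolerates this non-zero exit because `minikube status` encodes component state in its exit code ("may be ok" above). A Go sketch of that capture-and-tolerate pattern, assuming the same binary path and profile; runStatus is a hypothetical helper, not the harness's API:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runStatus mirrors the harness behaviour: a non-zero exit from
// `minikube status` is captured rather than treated as fatal, because the
// exit code encodes component state. Sketch only; the binary path and the
// decision to tolerate all non-zero codes are assumptions.
func runStatus(profile string) (output string, exitCode int, err error) {
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return string(out), ee.ExitCode(), nil // output is still meaningful
	}
	return string(out), 0, err
}

func main() {
	out, code, err := runStatus("functional-534748")
	fmt.Printf("exit=%d err=%v host=%q\n", code, err, out) // e.g. exit=2 host="Running\n"
}
```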
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ functional-534748 addons list -o json                                                                                                               │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ service │ functional-534748 service list                                                                                                                      │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ service │ functional-534748 service list -o json                                                                                                              │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ service │ functional-534748 service --namespace=default --https --url hello-node                                                                              │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ service │ functional-534748 service hello-node --url --format={{.IP}}                                                                                         │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ service │ functional-534748 service hello-node --url                                                                                                          │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ mount   │ -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3047742465/001:/mount-9p --alsologtostderr -v=1              │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ ssh     │ functional-534748 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh     │ functional-534748 ssh -- ls -la /mount-9p                                                                                                           │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh     │ functional-534748 ssh cat /mount-9p/test-1765349557041216384                                                                                        │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh     │ functional-534748 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ ssh     │ functional-534748 ssh sudo umount -f /mount-9p                                                                                                      │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh     │ functional-534748 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ mount   │ -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4177155203/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ ssh     │ functional-534748 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh     │ functional-534748 ssh -- ls -la /mount-9p                                                                                                           │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh     │ functional-534748 ssh sudo umount -f /mount-9p                                                                                                      │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ mount   │ -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3563118548/001:/mount1 --alsologtostderr -v=1                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ ssh     │ functional-534748 ssh findmnt -T /mount1                                                                                                            │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ mount   │ -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3563118548/001:/mount2 --alsologtostderr -v=1                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ mount   │ -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3563118548/001:/mount3 --alsologtostderr -v=1                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ ssh     │ functional-534748 ssh findmnt -T /mount1                                                                                                            │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh     │ functional-534748 ssh findmnt -T /mount2                                                                                                            │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh     │ functional-534748 ssh findmnt -T /mount3                                                                                                            │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ mount   │ -p functional-534748 --kill=true                                                                                                                    │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:38:02
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:38:02.996848  836363 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:38:02.996953  836363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:38:02.996957  836363 out.go:374] Setting ErrFile to fd 2...
	I1210 06:38:02.996961  836363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:38:02.997226  836363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 06:38:02.997576  836363 out.go:368] Setting JSON to false
	I1210 06:38:02.998612  836363 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":19207,"bootTime":1765329476,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 06:38:02.998671  836363 start.go:143] virtualization:  
	I1210 06:38:03.004094  836363 out.go:179] * [functional-534748] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:38:03.007279  836363 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:38:03.007472  836363 notify.go:221] Checking for updates...
	I1210 06:38:03.013532  836363 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:38:03.016433  836363 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:38:03.019434  836363 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 06:38:03.022270  836363 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:38:03.025162  836363 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:38:03.028574  836363 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:38:03.028673  836363 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:38:03.063427  836363 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:38:03.063527  836363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:38:03.124292  836363 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 06:38:03.114881143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:38:03.124387  836363 docker.go:319] overlay module found
	I1210 06:38:03.127603  836363 out.go:179] * Using the docker driver based on existing profile
	I1210 06:38:03.130606  836363 start.go:309] selected driver: docker
	I1210 06:38:03.130616  836363 start.go:927] validating driver "docker" against &{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:38:03.130726  836363 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:38:03.130828  836363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:38:03.183470  836363 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 06:38:03.17400928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:38:03.183897  836363 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:38:03.183921  836363 cni.go:84] Creating CNI manager for ""
	I1210 06:38:03.183969  836363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:38:03.184018  836363 start.go:353] cluster config:
	{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:38:03.188981  836363 out.go:179] * Starting "functional-534748" primary control-plane node in "functional-534748" cluster
	I1210 06:38:03.191768  836363 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 06:38:03.194630  836363 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:38:03.197557  836363 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:38:03.197592  836363 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 06:38:03.197600  836363 cache.go:65] Caching tarball of preloaded images
	I1210 06:38:03.197644  836363 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:38:03.197695  836363 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 06:38:03.197704  836363 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1210 06:38:03.197812  836363 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/config.json ...
	I1210 06:38:03.219374  836363 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:38:03.219395  836363 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:38:03.219415  836363 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:38:03.219445  836363 start.go:360] acquireMachinesLock for functional-534748: {Name:mkd9a3d78ae3a00b69c5be0f7badb099aea924eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:38:03.219514  836363 start.go:364] duration metric: took 49.855µs to acquireMachinesLock for "functional-534748"
	I1210 06:38:03.219532  836363 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:38:03.219536  836363 fix.go:54] fixHost starting: 
	I1210 06:38:03.219816  836363 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:38:03.236144  836363 fix.go:112] recreateIfNeeded on functional-534748: state=Running err=<nil>
	W1210 06:38:03.236163  836363 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:38:03.239412  836363 out.go:252] * Updating the running docker "functional-534748" container ...
	I1210 06:38:03.239438  836363 machine.go:94] provisionDockerMachine start ...
	I1210 06:38:03.239539  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:03.255986  836363 main.go:143] libmachine: Using SSH client type: native
	I1210 06:38:03.256288  836363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:38:03.256294  836363 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:38:03.393920  836363 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-534748
	
	I1210 06:38:03.393934  836363 ubuntu.go:182] provisioning hostname "functional-534748"
	I1210 06:38:03.393994  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:03.411659  836363 main.go:143] libmachine: Using SSH client type: native
	I1210 06:38:03.411963  836363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:38:03.411982  836363 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-534748 && echo "functional-534748" | sudo tee /etc/hostname
	I1210 06:38:03.556341  836363 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-534748
	
	I1210 06:38:03.556409  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:03.574119  836363 main.go:143] libmachine: Using SSH client type: native
	I1210 06:38:03.574414  836363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:38:03.574427  836363 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-534748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-534748/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-534748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:38:03.711044  836363 main.go:143] libmachine: SSH cmd err, output: <nil>: 
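	The SSH command above applies an idempotent /etc/hosts rewrite: keep any existing entry ending in the hostname, otherwise rewrite the 127.0.1.1 line or append one. The same logic as a self-contained Go sketch (ensureHostname is a hypothetical helper operating on the file contents as a string):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname reproduces the shell logic above in Go: if no /etc/hosts
// line already ends with the hostname, rewrite the 127.0.1.1 entry when one
// exists, otherwise append a new one.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // entry already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n", "functional-534748"))
}
```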
	I1210 06:38:03.711071  836363 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 06:38:03.711104  836363 ubuntu.go:190] setting up certificates
	I1210 06:38:03.711119  836363 provision.go:84] configureAuth start
	I1210 06:38:03.711202  836363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
	I1210 06:38:03.730176  836363 provision.go:143] copyHostCerts
	I1210 06:38:03.730250  836363 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 06:38:03.730257  836363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 06:38:03.730338  836363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 06:38:03.730431  836363 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 06:38:03.730435  836363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 06:38:03.730459  836363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 06:38:03.730669  836363 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 06:38:03.730673  836363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 06:38:03.730699  836363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 06:38:03.730787  836363 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.functional-534748 san=[127.0.0.1 192.168.49.2 functional-534748 localhost minikube]
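	The server cert generated above is signed against the minikube CA with the SAN list [127.0.0.1 192.168.49.2 functional-534748 localhost minikube]. A self-signed Go sketch with the same SAN layout (illustration only; the real cert is CA-signed):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed stand-in for the server cert generated above; minikube
	// actually signs it with ca.pem, but the SAN layout is the same.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-534748"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"functional-534748", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
```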
	I1210 06:38:03.830346  836363 provision.go:177] copyRemoteCerts
	I1210 06:38:03.830399  836363 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:38:03.830448  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:03.847359  836363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:38:03.942214  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 06:38:03.959615  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:38:03.976341  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:38:03.993197  836363 provision.go:87] duration metric: took 282.055172ms to configureAuth
	I1210 06:38:03.993214  836363 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:38:03.993400  836363 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:38:03.993405  836363 machine.go:97] duration metric: took 753.963524ms to provisionDockerMachine
	I1210 06:38:03.993412  836363 start.go:293] postStartSetup for "functional-534748" (driver="docker")
	I1210 06:38:03.993421  836363 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:38:03.993478  836363 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:38:03.993515  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:04.011825  836363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:38:04.110674  836363 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:38:04.114166  836363 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:38:04.114184  836363 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:38:04.114196  836363 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 06:38:04.114252  836363 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 06:38:04.114330  836363 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 06:38:04.114407  836363 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts -> hosts in /etc/test/nested/copy/786751
	I1210 06:38:04.114451  836363 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/786751
	I1210 06:38:04.122085  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 06:38:04.140353  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts --> /etc/test/nested/copy/786751/hosts (40 bytes)
	I1210 06:38:04.160314  836363 start.go:296] duration metric: took 166.888171ms for postStartSetup
	I1210 06:38:04.160387  836363 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:38:04.160439  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:04.179224  836363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:38:04.271903  836363 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:38:04.277112  836363 fix.go:56] duration metric: took 1.057568371s for fixHost
	I1210 06:38:04.277129  836363 start.go:83] releasing machines lock for "functional-534748", held for 1.057608798s
	I1210 06:38:04.277219  836363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
	I1210 06:38:04.295104  836363 ssh_runner.go:195] Run: cat /version.json
	I1210 06:38:04.295130  836363 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:38:04.295198  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:04.295203  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:04.320108  836363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:38:04.320646  836363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:38:04.418978  836363 ssh_runner.go:195] Run: systemctl --version
	I1210 06:38:04.509352  836363 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:38:04.513794  836363 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:38:04.513869  836363 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:38:04.521471  836363 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:38:04.521486  836363 start.go:496] detecting cgroup driver to use...
	I1210 06:38:04.521523  836363 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:38:04.521580  836363 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 06:38:04.537005  836363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 06:38:04.550809  836363 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:38:04.550892  836363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:38:04.567139  836363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:38:04.580704  836363 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:38:04.697131  836363 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:38:04.843057  836363 docker.go:234] disabling docker service ...
	I1210 06:38:04.843134  836363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:38:04.858243  836363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:38:04.871472  836363 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:38:04.992555  836363 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:38:05.113941  836363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:38:05.127335  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:38:05.141919  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 06:38:05.151900  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 06:38:05.161151  836363 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 06:38:05.161213  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 06:38:05.170764  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:38:05.180471  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 06:38:05.189238  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:38:05.197957  836363 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:38:05.206107  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 06:38:05.215515  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 06:38:05.224555  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 06:38:05.233326  836363 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:38:05.241235  836363 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:38:05.248850  836363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:38:05.372410  836363 ssh_runner.go:195] Run: sudo systemctl restart containerd
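	The sed invocations above rewrite /etc/containerd/config.toml in place, e.g. forcing SystemdCgroup = false for the cgroupfs driver. The same substitution expressed as a Go regexp, to make explicit what the pattern matches (a sketch; minikube itself shells out to sed):

```go
package main

import (
	"fmt"
	"regexp"
)

// setSystemdCgroup performs the same substitution as the
// `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'` above,
// preserving the line's indentation via the capture group.
func setSystemdCgroup(config string, enabled bool) string {
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	return re.ReplaceAllString(config, fmt.Sprintf("${1}SystemdCgroup = %t", enabled))
}

func main() {
	in := "[plugins.cri]\n  SystemdCgroup = true\n"
	fmt.Print(setSystemdCgroup(in, false)) // "  SystemdCgroup = false"
}
```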
	I1210 06:38:05.513843  836363 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 06:38:05.513915  836363 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 06:38:05.519638  836363 start.go:564] Will wait 60s for crictl version
	I1210 06:38:05.519732  836363 ssh_runner.go:195] Run: which crictl
	I1210 06:38:05.524751  836363 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:38:05.554788  836363 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 06:38:05.554852  836363 ssh_runner.go:195] Run: containerd --version
	I1210 06:38:05.575345  836363 ssh_runner.go:195] Run: containerd --version
	I1210 06:38:05.606405  836363 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 06:38:05.609314  836363 cli_runner.go:164] Run: docker network inspect functional-534748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:38:05.625429  836363 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 06:38:05.632180  836363 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1210 06:38:05.635024  836363 kubeadm.go:884] updating cluster {Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:38:05.635199  836363 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:38:05.635275  836363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:38:05.663485  836363 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 06:38:05.663496  836363 containerd.go:534] Images already preloaded, skipping extraction
	I1210 06:38:05.663555  836363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:38:05.692188  836363 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 06:38:05.692214  836363 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:38:05.692220  836363 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1210 06:38:05.692316  836363 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-534748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
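	The kubelet unit above is rendered from the node config; a sketch of assembling that ExecStart line in Go (parameter names are illustrative, not minikube's real template):

```go
package main

import (
	"fmt"
	"strings"
)

// kubeletExecStart assembles the ExecStart line rendered in the unit above.
func kubeletExecStart(version, node, ip string) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--hostname-override=" + node,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + ip,
	}
	return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s",
		version, strings.Join(flags, " "))
}

func main() {
	fmt.Println(kubeletExecStart("v1.35.0-beta.0", "functional-534748", "192.168.49.2"))
}
```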
	I1210 06:38:05.692382  836363 ssh_runner.go:195] Run: sudo crictl info
	I1210 06:38:05.716412  836363 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1210 06:38:05.716430  836363 cni.go:84] Creating CNI manager for ""
	I1210 06:38:05.716438  836363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:38:05.716453  836363 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:38:05.716479  836363 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-534748 NodeName:functional-534748 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:38:05.716586  836363 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-534748"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
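	The kubeadm config rendered above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick Go sketch that checks each document parses and reports its kind, assuming gopkg.in/yaml.v3 is available; the embedded snippet is an abbreviated stand-in for the full file:

```go
package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// Abbreviated stand-in for the kubeadm.yaml rendered above; the real file
	// carries four documents separated by "---".
	const cfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: v1.35.0-beta.0
`
	dec := yaml.NewDecoder(strings.NewReader(cfg))
	for {
		var doc map[string]any
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err) // a document kubeadm could not parse would fail here too
		}
		fmt.Println("parsed kind:", doc["kind"])
	}
}
```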
	
	I1210 06:38:05.716652  836363 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 06:38:05.724579  836363 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:38:05.724638  836363 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:38:05.732044  836363 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 06:38:05.744806  836363 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 06:38:05.757235  836363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1210 06:38:05.769602  836363 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:38:05.773238  836363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:38:05.892525  836363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:38:06.296632  836363 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748 for IP: 192.168.49.2
	I1210 06:38:06.296643  836363 certs.go:195] generating shared ca certs ...
	I1210 06:38:06.296658  836363 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:38:06.296809  836363 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 06:38:06.296849  836363 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 06:38:06.296855  836363 certs.go:257] generating profile certs ...
	I1210 06:38:06.296937  836363 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key
	I1210 06:38:06.297021  836363 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key.7cb3dc2f
	I1210 06:38:06.297068  836363 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key
	I1210 06:38:06.297177  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 06:38:06.297208  836363 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 06:38:06.297216  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:38:06.297246  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 06:38:06.297268  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:38:06.297291  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 06:38:06.297337  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 06:38:06.297938  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:38:06.317159  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:38:06.336653  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:38:06.357682  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:38:06.376860  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:38:06.394800  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:38:06.412862  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:38:06.430175  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:38:06.447717  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 06:38:06.465124  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:38:06.482520  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 06:38:06.500341  836363 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:38:06.513157  836363 ssh_runner.go:195] Run: openssl version
	I1210 06:38:06.519293  836363 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 06:38:06.526724  836363 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 06:38:06.534054  836363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 06:38:06.537762  836363 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 06:38:06.537817  836363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 06:38:06.579287  836363 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:38:06.586741  836363 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:06.593909  836363 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:38:06.601430  836363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:06.605107  836363 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:06.605174  836363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:06.646057  836363 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:38:06.653276  836363 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 06:38:06.660757  836363 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 06:38:06.668784  836363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 06:38:06.672757  836363 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 06:38:06.672825  836363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 06:38:06.713985  836363 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
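Each of the three installs above follows the same pattern: symlink the PEM into /etc/ssl/certs by name, compute its OpenSSL subject hash, and check for the <hash>.0 symlink that OpenSSL's trust lookup actually uses (51391683.0 above). A sketch of that pattern in Go, shelling out to openssl as the log does; the path and the create-if-missing step are illustrative:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path from the log

        // Symlink by name, as "ln -fs" does above.
        byName := filepath.Join("/etc/ssl/certs", filepath.Base(pem))
        os.Remove(byName)
        if err := os.Symlink(pem, byName); err != nil {
            panic(err)
        }

        // "openssl x509 -hash -noout -in <pem>" prints the subject hash
        // (e.g. b5213941) that OpenSSL uses to locate trusted roots.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))

        // Ensure the <hash>.0 symlink exists, matching "test -L /etc/ssl/certs/<hash>.0".
        byHash := filepath.Join("/etc/ssl/certs", hash+".0")
        if _, err := os.Lstat(byHash); os.IsNotExist(err) {
            if err := os.Symlink(pem, byHash); err != nil {
                panic(err)
            }
        }
        fmt.Println("installed", pem, "as", byHash)
    }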
	I1210 06:38:06.721257  836363 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:38:06.724932  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:38:06.765952  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:38:06.807038  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:38:06.847752  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:38:06.890289  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:38:06.933893  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
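The checkend runs above ask openssl whether each control-plane certificate expires within 86400 seconds (24 h); a non-zero exit would trigger regeneration. A pure-Go equivalent of that check, shown for illustration only (the logged code shells out to openssl on the node):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires within d, mirroring "openssl x509 -noout -checkend 86400".
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }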
	I1210 06:38:06.976437  836363 kubeadm.go:401] StartCluster: {Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:38:06.976545  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 06:38:06.976606  836363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:38:07.011412  836363 cri.go:89] found id: ""
	I1210 06:38:07.011470  836363 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:38:07.019342  836363 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:38:07.019351  836363 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:38:07.019420  836363 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:38:07.026888  836363 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:38:07.027424  836363 kubeconfig.go:125] found "functional-534748" server: "https://192.168.49.2:8441"
	I1210 06:38:07.028660  836363 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:38:07.037364  836363 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 06:23:31.333930823 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 06:38:05.762986837 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
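Drift detection above leans on diff's exit status: 0 means the rendered kubeadm.yaml is unchanged, 1 means it differs (here the enable-admission-plugins value changed), and anything higher is a real failure. A small sketch of that convention, with hypothetical helper names:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // configDrifted runs "diff -u old new". diff exits 0 when the files
    // match, 1 when they differ, and >1 on error, so exit status 1 is the
    // "drift detected" signal seen in the log above.
    func configDrifted(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil // identical
        }
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
            return true, string(out), nil // files differ
        }
        return false, "", err // diff itself failed
    }

    func main() {
        drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        if drifted {
            fmt.Println("kubeadm config drift:\n" + diff)
        }
    }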
	I1210 06:38:07.037389  836363 kubeadm.go:1161] stopping kube-system containers ...
	I1210 06:38:07.037401  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1210 06:38:07.037465  836363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:38:07.075015  836363 cri.go:89] found id: ""
	I1210 06:38:07.075109  836363 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 06:38:07.098429  836363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:38:07.106312  836363 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 10 06:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 10 06:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 10 06:27 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 10 06:27 /etc/kubernetes/scheduler.conf
	
	I1210 06:38:07.106367  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:38:07.114107  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:38:07.122067  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:38:07.122121  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:38:07.130176  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:38:07.138001  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:38:07.138055  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:38:07.145554  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:38:07.153390  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:38:07.153446  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
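The grep/rm cycle above keeps a kubeconfig only if it still mentions the expected endpoint; kubelet.conf, controller-manager.conf, and scheduler.conf all fail the check and are removed so kubeadm can regenerate them. A sketch of the same idea, with a hypothetical helper name:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pruneStaleKubeconfig removes conf if it does not mention the expected
    // API server endpoint, mirroring the grep-then-rm sequence in the log.
    func pruneStaleKubeconfig(conf, endpoint string) error {
        data, err := os.ReadFile(conf)
        if err != nil {
            return err
        }
        if strings.Contains(string(data), endpoint) {
            return nil // still points at the right endpoint
        }
        fmt.Printf("%q not found in %s - removing\n", endpoint, conf)
        return os.Remove(conf)
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:8441"
        for _, conf := range []string{
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            if err := pruneStaleKubeconfig(conf, endpoint); err != nil {
                fmt.Fprintln(os.Stderr, err)
            }
        }
    }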
	I1210 06:38:07.160768  836363 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:38:07.168493  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:38:07.213471  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:38:08.026655  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:38:08.236384  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:38:08.298826  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
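Rather than a full kubeadm init, the restart path replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) with the versioned binary directory prepended to PATH, exactly as the commands above show. A minimal sketch of that loop:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        binDir := "/var/lib/minikube/binaries/v1.35.0-beta.0"
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        // Phases in the order the log runs them.
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, phase := range phases {
            script := fmt.Sprintf("env PATH=\"%s:$PATH\" kubeadm init phase %s --config %s", binDir, phase, cfg)
            out, err := exec.Command("sudo", "/bin/bash", "-c", script).CombinedOutput()
            if err != nil {
                fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
                return
            }
        }
        fmt.Println("all kubeadm phases completed")
    }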
	I1210 06:38:08.351741  836363 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:38:08.351821  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:08.852713  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... identical pgrep probe repeated every ~500ms with no kube-apiserver process found; 117 entries elided ...]
	I1210 06:39:07.852642  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:08.352868  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:08.352944  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:08.381205  836363 cri.go:89] found id: ""
	I1210 06:39:08.381219  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.381227  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:08.381232  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:08.381288  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:08.404633  836363 cri.go:89] found id: ""
	I1210 06:39:08.404646  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.404654  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:08.404659  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:08.404721  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:08.428513  836363 cri.go:89] found id: ""
	I1210 06:39:08.428527  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.428534  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:08.428546  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:08.428606  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:08.453023  836363 cri.go:89] found id: ""
	I1210 06:39:08.453036  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.453043  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:08.453049  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:08.453105  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:08.481527  836363 cri.go:89] found id: ""
	I1210 06:39:08.481540  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.481547  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:08.481552  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:08.481609  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:08.506550  836363 cri.go:89] found id: ""
	I1210 06:39:08.506565  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.506580  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:08.506585  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:08.506649  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:08.531724  836363 cri.go:89] found id: ""
	I1210 06:39:08.531738  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.531745  836363 logs.go:284] No container was found matching "kindnet"
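Each component check above is one crictl invocation: --quiet prints bare container IDs, one per line, so empty output is exactly the found id: "" / 0 containers result logged for every component here. A sketch of that listing and parsing step:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // listContainers returns the container IDs crictl reports for a given
    // name filter; "--quiet" prints one ID per line, so empty output means
    // no matching containers exist.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainers(name)
            if err != nil {
                fmt.Fprintln(os.Stderr, err)
                continue
            }
            fmt.Printf("%s: %d containers %v\n", name, len(ids), ids)
        }
    }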
	I1210 06:39:08.531752  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:08.531763  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:08.571815  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:08.571832  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:08.630094  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:08.630112  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:08.647317  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:08.647335  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:08.715592  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:08.706872   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.707541   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.709199   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.709733   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.711367   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:08.715603  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:08.715614  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
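With no containers to inspect, log gathering falls back to host sources: the kubelet and containerd journals (last 400 lines each), dmesg, and a kubectl describe nodes that keeps failing while the apiserver is down. A sketch of the journal half of that collection, with a hypothetical helper:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherLogs shells out to the same journal sources the log-collection
    // step uses: the last 400 entries for each systemd unit.
    func gatherLogs() map[string]string {
        units := []string{"kubelet", "containerd"}
        logs := make(map[string]string)
        for _, unit := range units {
            out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
            if err != nil {
                logs[unit] = fmt.Sprintf("error: %v", err)
                continue
            }
            logs[unit] = string(out)
        }
        return logs
    }

    func main() {
        for unit, out := range gatherLogs() {
            fmt.Printf("=== %s (%d bytes) ===\n", unit, len(out))
        }
    }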
	I1210 06:39:11.280652  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:11.290422  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:11.290516  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:11.314331  836363 cri.go:89] found id: ""
	I1210 06:39:11.314345  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.314352  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:11.314357  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:11.314419  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:11.337726  836363 cri.go:89] found id: ""
	I1210 06:39:11.337741  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.337747  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:11.337752  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:11.337812  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:11.365800  836363 cri.go:89] found id: ""
	I1210 06:39:11.365815  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.365821  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:11.365826  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:11.365886  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:11.394804  836363 cri.go:89] found id: ""
	I1210 06:39:11.394818  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.394825  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:11.394830  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:11.394887  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:11.419726  836363 cri.go:89] found id: ""
	I1210 06:39:11.419740  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.419746  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:11.419751  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:11.419810  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:11.445533  836363 cri.go:89] found id: ""
	I1210 06:39:11.445547  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.445554  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:11.445560  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:11.445618  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:11.470212  836363 cri.go:89] found id: ""
	I1210 06:39:11.470227  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.470233  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:11.470241  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:11.470251  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:11.529183  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:11.529202  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:11.546384  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:11.546400  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:11.640312  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:11.631803   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.632307   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.633874   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.635142   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.635867   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:11.640322  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:11.640333  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:11.703828  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:11.703850  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:14.230665  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:14.241121  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:14.241183  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:14.268951  836363 cri.go:89] found id: ""
	I1210 06:39:14.268964  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.268974  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:14.268979  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:14.269035  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:14.292742  836363 cri.go:89] found id: ""
	I1210 06:39:14.292761  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.292768  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:14.292773  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:14.292838  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:14.317527  836363 cri.go:89] found id: ""
	I1210 06:39:14.317540  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.317547  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:14.317552  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:14.317609  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:14.344738  836363 cri.go:89] found id: ""
	I1210 06:39:14.344751  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.344758  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:14.344764  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:14.344822  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:14.369086  836363 cri.go:89] found id: ""
	I1210 06:39:14.369101  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.369108  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:14.369114  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:14.369172  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:14.393919  836363 cri.go:89] found id: ""
	I1210 06:39:14.393932  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.393938  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:14.393943  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:14.394005  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:14.418228  836363 cri.go:89] found id: ""
	I1210 06:39:14.418242  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.418249  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:14.418257  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:14.418267  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:14.481544  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:14.481564  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:14.509051  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:14.509072  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:14.574238  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:14.574259  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:14.594306  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:14.594323  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:14.659264  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:14.651062   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.651501   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.653284   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.653744   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.655187   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:17.159960  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:17.169978  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:17.170036  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:17.194333  836363 cri.go:89] found id: ""
	I1210 06:39:17.194347  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.194354  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:17.194359  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:17.194418  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:17.218507  836363 cri.go:89] found id: ""
	I1210 06:39:17.218521  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.218528  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:17.218533  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:17.218617  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:17.243499  836363 cri.go:89] found id: ""
	I1210 06:39:17.243513  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.243521  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:17.243527  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:17.243585  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:17.271019  836363 cri.go:89] found id: ""
	I1210 06:39:17.271034  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.271041  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:17.271048  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:17.271106  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:17.296491  836363 cri.go:89] found id: ""
	I1210 06:39:17.296506  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.296513  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:17.296517  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:17.296574  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:17.327127  836363 cri.go:89] found id: ""
	I1210 06:39:17.327142  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.327149  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:17.327156  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:17.327214  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:17.351001  836363 cri.go:89] found id: ""
	I1210 06:39:17.351016  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.351023  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:17.351031  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:17.351046  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:17.408952  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:17.408971  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:17.425660  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:17.425676  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:17.495167  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:17.486424   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.487213   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.488883   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.489501   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.491179   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:17.495179  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:17.495190  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:17.562848  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:17.562868  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:20.100845  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:20.111238  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:20.111303  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:20.135715  836363 cri.go:89] found id: ""
	I1210 06:39:20.135730  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.135737  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:20.135742  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:20.135849  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:20.162728  836363 cri.go:89] found id: ""
	I1210 06:39:20.162742  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.162750  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:20.162754  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:20.162817  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:20.186896  836363 cri.go:89] found id: ""
	I1210 06:39:20.186910  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.186918  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:20.186923  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:20.187033  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:20.211401  836363 cri.go:89] found id: ""
	I1210 06:39:20.211416  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.211423  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:20.211428  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:20.211494  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:20.241049  836363 cri.go:89] found id: ""
	I1210 06:39:20.241063  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.241071  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:20.241075  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:20.241136  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:20.264812  836363 cri.go:89] found id: ""
	I1210 06:39:20.264826  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.264833  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:20.264839  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:20.264905  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:20.289153  836363 cri.go:89] found id: ""
	I1210 06:39:20.289167  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.289179  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:20.289187  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:20.289198  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:20.305825  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:20.305841  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:20.372702  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:20.364207   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.364892   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.366572   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.367140   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.368841   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:20.372716  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:20.372727  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:20.434137  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:20.434156  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:20.462784  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:20.462801  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
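
The cycle above (probe for a kube-apiserver process with pgrep, enumerate each control-plane container with crictl, then gather the kubelet, dmesg, describe-nodes, containerd and container-status logs) repeats below roughly every three seconds. As a rough Go sketch of the enumeration step, assuming only that crictl is on PATH and that "crictl ps -a --quiet --name=<name>" prints one container ID per line as the log shows; this is an illustration, not minikube's actual cri.go:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs mirrors the "crictl ps -a --quiet --name=<name>" calls
	// in the log: it returns the IDs of all containers (any state) whose name
	// matches. An empty slice corresponds to the log's `found id: ""` lines.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
			ids, err := listContainerIDs(name)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %v\n", name, ids)
		}
	}

An empty result for every name, as each `found id: ""` line above records, is what keeps the retry loop below going.
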
	I1210 06:39:23.020338  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:23.033250  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:23.033312  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:23.057227  836363 cri.go:89] found id: ""
	I1210 06:39:23.057241  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.057247  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:23.057252  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:23.057310  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:23.082261  836363 cri.go:89] found id: ""
	I1210 06:39:23.082275  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.082282  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:23.082287  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:23.082346  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:23.106424  836363 cri.go:89] found id: ""
	I1210 06:39:23.106438  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.106445  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:23.106451  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:23.106554  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:23.132399  836363 cri.go:89] found id: ""
	I1210 06:39:23.132414  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.132429  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:23.132435  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:23.132492  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:23.162454  836363 cri.go:89] found id: ""
	I1210 06:39:23.162494  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.162501  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:23.162507  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:23.162581  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:23.187219  836363 cri.go:89] found id: ""
	I1210 06:39:23.187233  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.187240  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:23.187245  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:23.187310  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:23.212781  836363 cri.go:89] found id: ""
	I1210 06:39:23.212795  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.212802  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:23.212809  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:23.212821  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:23.269301  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:23.269321  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:23.286019  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:23.286034  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:23.349588  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:23.342068   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.342600   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.344048   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.344478   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.345899   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:23.349598  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:23.349608  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:23.410637  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:23.410657  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
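
Every describe-nodes attempt fails the same way because nothing is listening on localhost:8441, the apiserver port this profile was started with. A minimal Go probe that reproduces the refusal (the port is taken from the log; purely illustrative):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The kubeconfig in the log points kubectl at localhost:8441; with no
		// kube-apiserver running, the TCP dial is refused outright.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err) // connect: connection refused
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}
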
	I1210 06:39:25.946659  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:25.956427  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:25.956484  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:25.980198  836363 cri.go:89] found id: ""
	I1210 06:39:25.980212  836363 logs.go:282] 0 containers: []
	W1210 06:39:25.980219  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:25.980224  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:25.980282  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:26.007385  836363 cri.go:89] found id: ""
	I1210 06:39:26.007400  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.007408  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:26.007413  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:26.007504  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:26.036729  836363 cri.go:89] found id: ""
	I1210 06:39:26.036743  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.036750  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:26.036755  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:26.036816  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:26.062224  836363 cri.go:89] found id: ""
	I1210 06:39:26.062238  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.062245  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:26.062250  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:26.062310  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:26.087647  836363 cri.go:89] found id: ""
	I1210 06:39:26.087661  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.087668  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:26.087682  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:26.087742  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:26.111730  836363 cri.go:89] found id: ""
	I1210 06:39:26.111744  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.111751  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:26.111756  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:26.111815  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:26.140490  836363 cri.go:89] found id: ""
	I1210 06:39:26.140504  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.140511  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:26.140525  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:26.140534  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:26.196200  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:26.196219  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:26.212571  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:26.212587  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:26.273577  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:26.265176   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.265699   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.267151   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.267679   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.269363   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:26.273590  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:26.273603  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:26.335078  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:26.335098  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
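
The timestamps (06:39:20, :23, :26, ...) indicate a fixed-interval wait on the apiserver process. One plausible shape for such a loop is sketched below; the pgrep pattern and the roughly three-second interval are copied from the log, the timeout is a placeholder, and minikube's real retry code may be structured differently:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning mirrors the log's "sudo pgrep -xnf kube-apiserver.*minikube.*"
	// check: pgrep exits 0 only when a matching process exists.
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	// waitForAPIServer polls at a fixed interval until the process appears or
	// the deadline passes, the same cadence visible in the timestamps above.
	func waitForAPIServer(interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				return nil
			}
			time.Sleep(interval)
		}
		return errors.New("timed out waiting for kube-apiserver process")
	}

	func main() {
		if err := waitForAPIServer(3*time.Second, 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
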
	I1210 06:39:28.869553  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:28.880899  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:28.880964  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:28.906428  836363 cri.go:89] found id: ""
	I1210 06:39:28.906442  836363 logs.go:282] 0 containers: []
	W1210 06:39:28.906449  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:28.906454  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:28.906544  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:28.931886  836363 cri.go:89] found id: ""
	I1210 06:39:28.931900  836363 logs.go:282] 0 containers: []
	W1210 06:39:28.931908  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:28.931912  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:28.931973  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:28.961315  836363 cri.go:89] found id: ""
	I1210 06:39:28.961329  836363 logs.go:282] 0 containers: []
	W1210 06:39:28.961336  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:28.961340  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:28.961401  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:28.986397  836363 cri.go:89] found id: ""
	I1210 06:39:28.986411  836363 logs.go:282] 0 containers: []
	W1210 06:39:28.986419  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:28.986425  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:28.986507  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:29.012532  836363 cri.go:89] found id: ""
	I1210 06:39:29.012546  836363 logs.go:282] 0 containers: []
	W1210 06:39:29.012554  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:29.012559  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:29.012617  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:29.041722  836363 cri.go:89] found id: ""
	I1210 06:39:29.041736  836363 logs.go:282] 0 containers: []
	W1210 06:39:29.041744  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:29.041749  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:29.041810  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:29.067638  836363 cri.go:89] found id: ""
	I1210 06:39:29.067652  836363 logs.go:282] 0 containers: []
	W1210 06:39:29.067660  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:29.067675  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:29.067686  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:29.123932  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:29.123951  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:29.140346  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:29.140363  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:29.205033  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:29.196885   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.197511   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.199079   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.199683   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.201215   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:29.205044  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:29.205056  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:29.268564  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:29.268592  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
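
Each diagnostic pass shells out through /bin/bash to pull the last 400 journal lines per unit, exactly as printed above. A hedged Go equivalent of that gathering step (unit names from the log; CombinedOutput also captures journalctl's own errors):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gatherUnitLogs mirrors the log-gathering commands above: the last 400
	// journal lines for a systemd unit, run through bash exactly as shown.
	func gatherUnitLogs(unit string) (string, error) {
		cmd := fmt.Sprintf("sudo journalctl -u %s -n 400", unit)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, unit := range []string{"kubelet", "containerd"} {
			logs, err := gatherUnitLogs(unit)
			if err != nil {
				fmt.Printf("gathering %s logs failed: %v\n", unit, err)
				continue
			}
			fmt.Printf("=== %s (%d bytes) ===\n", unit, len(logs))
		}
	}
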
	I1210 06:39:31.797415  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:31.810439  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:31.810560  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:31.839718  836363 cri.go:89] found id: ""
	I1210 06:39:31.839731  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.839738  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:31.839743  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:31.839812  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:31.866887  836363 cri.go:89] found id: ""
	I1210 06:39:31.866901  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.866908  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:31.866913  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:31.866971  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:31.896088  836363 cri.go:89] found id: ""
	I1210 06:39:31.896102  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.896109  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:31.896114  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:31.896183  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:31.920769  836363 cri.go:89] found id: ""
	I1210 06:39:31.920783  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.920790  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:31.920804  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:31.920870  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:31.944941  836363 cri.go:89] found id: ""
	I1210 06:39:31.944955  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.944973  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:31.944979  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:31.945062  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:31.969699  836363 cri.go:89] found id: ""
	I1210 06:39:31.969713  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.969719  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:31.969734  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:31.969796  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:31.994263  836363 cri.go:89] found id: ""
	I1210 06:39:31.994288  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.994296  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:31.994305  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:31.994315  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:32.051337  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:32.051358  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:32.068506  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:32.068524  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:32.133010  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:32.124121   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.124862   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.126702   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.127174   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.128721   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:32.133022  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:32.133032  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:32.195411  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:32.195432  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:34.725830  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:34.736154  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:34.736227  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:34.760592  836363 cri.go:89] found id: ""
	I1210 06:39:34.760606  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.760613  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:34.760618  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:34.760679  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:34.789194  836363 cri.go:89] found id: ""
	I1210 06:39:34.789208  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.789215  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:34.789220  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:34.789290  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:34.821768  836363 cri.go:89] found id: ""
	I1210 06:39:34.821783  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.821798  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:34.821804  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:34.821862  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:34.851156  836363 cri.go:89] found id: ""
	I1210 06:39:34.851182  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.851190  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:34.851195  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:34.851262  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:34.881339  836363 cri.go:89] found id: ""
	I1210 06:39:34.881353  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.881361  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:34.881366  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:34.881439  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:34.906857  836363 cri.go:89] found id: ""
	I1210 06:39:34.906871  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.906878  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:34.906884  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:34.906950  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:34.935793  836363 cri.go:89] found id: ""
	I1210 06:39:34.935807  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.935814  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:34.935822  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:34.935832  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:34.993322  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:34.993345  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:35.011292  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:35.011309  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:35.078043  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:35.069050   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:35.070041   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:35.070728   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:35.072495   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:35.073080   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:35.078052  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:35.078063  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:35.146644  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:35.146671  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
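
The container-status command embeds a shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, i.e. prefer crictl and only fall back to docker if crictl is missing or exits non-zero. The same decision in Go, as an assumed-equivalent sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus reproduces the fallback in the log: prefer crictl for a
	// CRI runtime such as containerd, and fall back to docker if crictl is
	// unavailable or fails.
	func containerStatus() (string, error) {
		if path, err := exec.LookPath("crictl"); err == nil {
			if out, err := exec.Command("sudo", path, "ps", "-a").CombinedOutput(); err == nil {
				return string(out), nil
			}
		}
		out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("no runtime answered:", err)
			return
		}
		fmt.Print(out)
	}
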
	I1210 06:39:37.678658  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:37.688848  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:37.688925  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:37.713621  836363 cri.go:89] found id: ""
	I1210 06:39:37.713635  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.713642  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:37.713647  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:37.713706  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:37.738638  836363 cri.go:89] found id: ""
	I1210 06:39:37.738651  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.738658  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:37.738663  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:37.738728  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:37.767364  836363 cri.go:89] found id: ""
	I1210 06:39:37.767378  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.767385  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:37.767390  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:37.767446  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:37.804827  836363 cri.go:89] found id: ""
	I1210 06:39:37.804841  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.804848  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:37.804854  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:37.804911  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:37.830424  836363 cri.go:89] found id: ""
	I1210 06:39:37.830438  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.830445  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:37.830449  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:37.830529  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:37.862851  836363 cri.go:89] found id: ""
	I1210 06:39:37.862864  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.862871  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:37.862876  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:37.862933  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:37.887629  836363 cri.go:89] found id: ""
	I1210 06:39:37.887643  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.887650  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:37.887686  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:37.887698  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:37.946033  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:37.946053  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:37.962951  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:37.962969  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:38.030263  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:38.021061   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:38.021797   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:38.022740   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:38.024684   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:38.025056   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:38.030274  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:38.030285  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:38.093462  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:38.093482  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:40.622687  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:40.632840  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:40.632902  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:40.657235  836363 cri.go:89] found id: ""
	I1210 06:39:40.657248  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.657255  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:40.657261  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:40.657320  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:40.681835  836363 cri.go:89] found id: ""
	I1210 06:39:40.681849  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.681857  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:40.681862  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:40.681919  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:40.708085  836363 cri.go:89] found id: ""
	I1210 06:39:40.708099  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.708106  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:40.708111  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:40.708172  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:40.734852  836363 cri.go:89] found id: ""
	I1210 06:39:40.734867  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.734874  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:40.734879  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:40.734937  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:40.760765  836363 cri.go:89] found id: ""
	I1210 06:39:40.760779  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.760786  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:40.760791  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:40.760862  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:40.785777  836363 cri.go:89] found id: ""
	I1210 06:39:40.785791  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.785797  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:40.785802  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:40.785862  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:40.812943  836363 cri.go:89] found id: ""
	I1210 06:39:40.812957  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.812963  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:40.812971  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:40.812981  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:40.882713  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:40.874213   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:40.874907   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:40.876393   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:40.876781   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:40.878311   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:40.882724  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:40.882746  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:40.946502  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:40.946522  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:40.973695  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:40.973711  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:41.028086  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:41.028105  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
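
Note that the five log sources are gathered in a different order from cycle to cycle (dmesg first at 06:39:20, kubelet first at 06:39:23, describe nodes first at 06:39:40). If the sources are kept in a Go map, this is expected rather than a bug, since Go deliberately randomizes map iteration order; a small demonstration with hypothetical entries:

	package main

	import "fmt"

	func main() {
		// Go randomizes map iteration order per iteration, which would
		// explain why "describe nodes" sometimes precedes "kubelet" above.
		sources := map[string]string{
			"kubelet":          "journalctl -u kubelet -n 400",
			"dmesg":            "dmesg --level warn,err",
			"describe nodes":   "kubectl describe nodes",
			"containerd":       "journalctl -u containerd -n 400",
			"container status": "crictl ps -a",
		}
		for name := range sources {
			fmt.Println(name) // order varies between runs
		}
	}
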
	I1210 06:39:43.544743  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:43.554582  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:43.554639  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:43.578394  836363 cri.go:89] found id: ""
	I1210 06:39:43.578408  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.578415  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:43.578421  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:43.578501  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:43.602120  836363 cri.go:89] found id: ""
	I1210 06:39:43.602134  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.602141  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:43.602152  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:43.602211  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:43.626641  836363 cri.go:89] found id: ""
	I1210 06:39:43.626655  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.626662  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:43.626666  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:43.626730  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:43.650792  836363 cri.go:89] found id: ""
	I1210 06:39:43.650805  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.650812  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:43.650817  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:43.650875  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:43.676181  836363 cri.go:89] found id: ""
	I1210 06:39:43.676195  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.676201  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:43.676207  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:43.676264  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:43.700288  836363 cri.go:89] found id: ""
	I1210 06:39:43.700301  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.700308  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:43.700317  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:43.700376  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:43.723140  836363 cri.go:89] found id: ""
	I1210 06:39:43.723154  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.723161  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:43.723169  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:43.723179  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:43.777323  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:43.777344  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:43.793764  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:43.793781  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:43.876520  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:43.868105   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:43.868820   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:43.870334   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:43.870859   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:43.872328   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:43.876531  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:43.876546  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:43.937962  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:43.937982  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
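
Every cycle in this log runs the same per-component scan before falling back to raw log gathering. A minimal sketch of that scan, using the exact crictl invocation shown above (assumes a containerd CRI with crictl on the node, as in this run):

    # Reproduce minikube's control-plane scan by hand on the node.
    # Empty output for a name corresponds to the 'found id: ""' lines above.
    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "${name}: ${ids:-<none>}"
    done
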
	I1210 06:39:46.471232  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:46.481349  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:46.481414  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:46.505604  836363 cri.go:89] found id: ""
	I1210 06:39:46.505618  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.505625  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:46.505631  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:46.505693  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:46.530584  836363 cri.go:89] found id: ""
	I1210 06:39:46.530598  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.530605  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:46.530610  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:46.530667  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:46.555675  836363 cri.go:89] found id: ""
	I1210 06:39:46.555689  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.555696  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:46.555701  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:46.555758  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:46.579225  836363 cri.go:89] found id: ""
	I1210 06:39:46.579240  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.579246  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:46.579252  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:46.579309  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:46.603318  836363 cri.go:89] found id: ""
	I1210 06:39:46.603332  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.603339  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:46.603344  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:46.603400  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:46.628198  836363 cri.go:89] found id: ""
	I1210 06:39:46.628212  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.628219  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:46.628224  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:46.628280  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:46.651425  836363 cri.go:89] found id: ""
	I1210 06:39:46.651439  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.651446  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:46.651454  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:46.651464  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:46.706345  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:46.706364  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:46.722718  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:46.722733  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:46.788441  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:46.780334   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.780989   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.782563   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.783115   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.784714   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:46.780334   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.780989   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.782563   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.783115   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.784714   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:46.788461  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:46.788474  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:46.856250  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:46.856269  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
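
The describe-nodes failures above all reduce to one symptom: nothing is listening on localhost:8441. A hedged way to confirm that directly from inside the node (the profile name below is a placeholder, not taken from this log):

    # Probe the apiserver port that kubectl dials. On connection refused,
    # curl prints 000 and exits 7, matching the stderr captured above;
    # once kube-apiserver is up this returns an HTTP status instead.
    minikube ssh -p <profile> -- \
      curl -sk -o /dev/null -w '%{http_code}\n' https://localhost:8441/healthz
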
	I1210 06:39:49.385907  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:49.395772  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:49.395833  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:49.419273  836363 cri.go:89] found id: ""
	I1210 06:39:49.419286  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.419294  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:49.419299  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:49.419357  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:49.444546  836363 cri.go:89] found id: ""
	I1210 06:39:49.444560  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.444567  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:49.444572  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:49.444634  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:49.469099  836363 cri.go:89] found id: ""
	I1210 06:39:49.469113  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.469120  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:49.469125  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:49.469182  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:49.497447  836363 cri.go:89] found id: ""
	I1210 06:39:49.497461  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.497468  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:49.497473  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:49.497531  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:49.521614  836363 cri.go:89] found id: ""
	I1210 06:39:49.521628  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.521635  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:49.521640  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:49.521700  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:49.546324  836363 cri.go:89] found id: ""
	I1210 06:39:49.546338  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.546345  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:49.546351  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:49.546408  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:49.569503  836363 cri.go:89] found id: ""
	I1210 06:39:49.569516  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.569523  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:49.569531  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:49.569541  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:49.625182  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:49.625201  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:49.641754  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:49.641772  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:49.705447  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:49.697491   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.698134   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.699780   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.700234   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.701724   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:49.697491   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.698134   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.699780   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.700234   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.701724   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:49.705457  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:49.705478  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:49.766615  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:49.766634  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:52.302628  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:52.312769  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:52.312832  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:52.338228  836363 cri.go:89] found id: ""
	I1210 06:39:52.338242  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.338249  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:52.338254  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:52.338315  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:52.363997  836363 cri.go:89] found id: ""
	I1210 06:39:52.364011  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.364018  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:52.364024  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:52.364083  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:52.389867  836363 cri.go:89] found id: ""
	I1210 06:39:52.389881  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.389888  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:52.389894  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:52.389959  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:52.416171  836363 cri.go:89] found id: ""
	I1210 06:39:52.416186  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.416193  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:52.416199  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:52.416262  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:52.440036  836363 cri.go:89] found id: ""
	I1210 06:39:52.440051  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.440058  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:52.440064  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:52.440127  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:52.465173  836363 cri.go:89] found id: ""
	I1210 06:39:52.465188  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.465195  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:52.465200  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:52.465266  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:52.490275  836363 cri.go:89] found id: ""
	I1210 06:39:52.490289  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.490296  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:52.490304  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:52.490316  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:52.507524  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:52.507541  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:52.572947  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:52.565302   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.565716   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.567214   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.567524   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.569003   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:52.565302   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.565716   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.567214   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.567524   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.569003   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:52.572957  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:52.572967  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:52.639898  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:52.639920  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:52.671836  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:52.671853  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
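
The "container status" gatherer repeated in each cycle is deliberately runtime-agnostic. Spelled out, the command does the following:

    # Prefer crictl (resolving its full path when `which` finds it, else the
    # bare name), and only if that command fails fall back to the Docker CLI.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
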
	I1210 06:39:55.228555  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:55.238632  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:55.238692  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:55.262819  836363 cri.go:89] found id: ""
	I1210 06:39:55.262833  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.262840  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:55.262845  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:55.262903  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:55.287262  836363 cri.go:89] found id: ""
	I1210 06:39:55.287276  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.287282  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:55.287287  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:55.287347  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:55.312064  836363 cri.go:89] found id: ""
	I1210 06:39:55.312077  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.312084  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:55.312089  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:55.312147  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:55.340546  836363 cri.go:89] found id: ""
	I1210 06:39:55.340560  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.340566  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:55.340572  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:55.340638  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:55.369203  836363 cri.go:89] found id: ""
	I1210 06:39:55.369217  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.369224  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:55.369229  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:55.369294  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:55.394186  836363 cri.go:89] found id: ""
	I1210 06:39:55.394200  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.394213  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:55.394218  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:55.394275  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:55.418250  836363 cri.go:89] found id: ""
	I1210 06:39:55.418264  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.418271  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:55.418279  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:55.418293  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:55.449481  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:55.449497  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:55.505651  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:55.505670  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:55.522722  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:55.522739  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:55.595372  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:55.580192   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.580773   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.588978   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.589842   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.591512   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:55.580192   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.580773   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.588978   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.589842   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.591512   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:55.595383  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:55.595396  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
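
The timestamps show the whole cycle repeating on a roughly three-second cadence. As an illustration only, not minikube's actual code, the pgrep gate at the top of each cycle amounts to:

    # Poll until a kube-apiserver process for this profile shows up.
    # pgrep flags as used in the log: -x exact match, -n newest process,
    # -f match against the full command line.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done
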
	I1210 06:39:58.156956  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:58.167095  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:58.167157  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:58.191075  836363 cri.go:89] found id: ""
	I1210 06:39:58.191089  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.191096  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:58.191101  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:58.191161  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:58.219145  836363 cri.go:89] found id: ""
	I1210 06:39:58.219159  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.219166  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:58.219171  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:58.219230  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:58.243820  836363 cri.go:89] found id: ""
	I1210 06:39:58.243834  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.243841  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:58.243846  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:58.243903  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:58.273220  836363 cri.go:89] found id: ""
	I1210 06:39:58.273234  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.273241  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:58.273246  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:58.273306  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:58.296744  836363 cri.go:89] found id: ""
	I1210 06:39:58.296758  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.296765  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:58.296770  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:58.296826  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:58.321374  836363 cri.go:89] found id: ""
	I1210 06:39:58.321389  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.321395  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:58.321401  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:58.321460  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:58.345587  836363 cri.go:89] found id: ""
	I1210 06:39:58.345601  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.345607  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:58.345615  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:58.345626  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:58.363238  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:58.363255  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:58.430409  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:58.422109   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.422784   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.424524   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.425019   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.426627   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:58.422109   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.422784   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.424524   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.425019   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.426627   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:58.430420  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:58.430439  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:58.492984  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:58.493002  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:58.520139  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:58.520155  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
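
For reference, the dmesg invocation in each cycle filters the kernel ring buffer down to warning-and-worse messages. Flag meanings per util-linux dmesg:

    #   -P            --nopager: write straight to stdout
    #   -H            --human: human-readable timestamps
    #   -L=never      --color=never: no ANSI codes in the captured output
    #   --level ...   keep only warn, err, crit, alert, emerg messages
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
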
	I1210 06:40:01.076701  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:01.088176  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:01.088237  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:01.115625  836363 cri.go:89] found id: ""
	I1210 06:40:01.115641  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.115648  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:01.115653  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:01.115713  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:01.142756  836363 cri.go:89] found id: ""
	I1210 06:40:01.142771  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.142779  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:01.142784  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:01.142854  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:01.174021  836363 cri.go:89] found id: ""
	I1210 06:40:01.174036  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.174043  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:01.174048  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:01.174115  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:01.200639  836363 cri.go:89] found id: ""
	I1210 06:40:01.200654  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.200661  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:01.200667  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:01.200729  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:01.225759  836363 cri.go:89] found id: ""
	I1210 06:40:01.225772  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.225779  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:01.225785  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:01.225851  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:01.250911  836363 cri.go:89] found id: ""
	I1210 06:40:01.250926  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.250934  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:01.250940  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:01.251003  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:01.279325  836363 cri.go:89] found id: ""
	I1210 06:40:01.279339  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.279347  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:01.279355  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:01.279366  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:01.335352  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:01.335371  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:01.352578  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:01.352596  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:01.422752  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:01.414308   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.415520   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.417210   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.417554   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.418810   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:01.414308   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.415520   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.417210   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.417554   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.418810   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:01.422763  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:01.422778  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:01.484637  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:01.484658  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
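
Every kubectl attempt dials localhost:8441 because that is the server recorded in the node-local kubeconfig passed via --kubeconfig. A hedged one-liner to confirm (the expected endpoint is an inference from the refused dials above, not output from this run):

    # Show which server endpoint /var/lib/minikube/kubeconfig points kubectl at.
    sudo grep -n 'server:' /var/lib/minikube/kubeconfig
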
	I1210 06:40:04.016723  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:04.027134  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:04.027199  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:04.058110  836363 cri.go:89] found id: ""
	I1210 06:40:04.058123  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.058131  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:04.058136  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:04.058194  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:04.085839  836363 cri.go:89] found id: ""
	I1210 06:40:04.085853  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.085859  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:04.085874  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:04.085938  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:04.112846  836363 cri.go:89] found id: ""
	I1210 06:40:04.112870  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.112877  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:04.112884  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:04.112952  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:04.144605  836363 cri.go:89] found id: ""
	I1210 06:40:04.144619  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.144626  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:04.144631  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:04.144698  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:04.170078  836363 cri.go:89] found id: ""
	I1210 06:40:04.170093  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.170111  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:04.170116  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:04.170187  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:04.195493  836363 cri.go:89] found id: ""
	I1210 06:40:04.195560  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.195568  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:04.195573  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:04.195663  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:04.224488  836363 cri.go:89] found id: ""
	I1210 06:40:04.224502  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.224509  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:04.224518  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:04.224528  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:04.280631  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:04.280651  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:04.297645  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:04.297663  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:04.366830  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:04.356738   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.357139   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.358834   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.359561   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.361366   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:04.356738   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.357139   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.358834   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.359561   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.361366   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:04.366842  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:04.366854  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:04.430241  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:04.430260  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
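
With zero control-plane containers ever appearing, the useful next question is whether kubelet itself is healthy and what it reports about the static pod manifests. A minimal sketch, assuming systemd manages kubelet on the node (as the journalctl gathers above imply):

    sudo systemctl is-active kubelet
    # Surface the most recent kubelet complaints without the full 400 lines.
    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20
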
	I1210 06:40:06.963156  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:06.973415  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:06.973480  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:06.997210  836363 cri.go:89] found id: ""
	I1210 06:40:06.997223  836363 logs.go:282] 0 containers: []
	W1210 06:40:06.997230  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:06.997235  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:06.997292  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:07.024360  836363 cri.go:89] found id: ""
	I1210 06:40:07.024374  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.024381  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:07.024386  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:07.024443  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:07.056844  836363 cri.go:89] found id: ""
	I1210 06:40:07.056857  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.056864  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:07.056869  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:07.056926  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:07.095983  836363 cri.go:89] found id: ""
	I1210 06:40:07.095997  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.096004  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:07.096010  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:07.096080  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:07.126932  836363 cri.go:89] found id: ""
	I1210 06:40:07.126947  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.126954  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:07.126958  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:07.127020  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:07.151807  836363 cri.go:89] found id: ""
	I1210 06:40:07.151823  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.151831  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:07.151835  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:07.151895  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:07.175459  836363 cri.go:89] found id: ""
	I1210 06:40:07.175473  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.175480  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:07.175489  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:07.175499  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:07.229963  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:07.229984  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:07.249632  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:07.249654  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:07.314011  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:07.306140   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.306891   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.308497   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.308812   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.310262   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:07.306140   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.306891   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.308497   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.308812   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.310262   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:07.314022  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:07.314034  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:07.376148  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:07.376173  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:09.907917  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:09.918267  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:09.918339  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:09.946634  836363 cri.go:89] found id: ""
	I1210 06:40:09.946648  836363 logs.go:282] 0 containers: []
	W1210 06:40:09.946654  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:09.946660  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:09.946729  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:09.971532  836363 cri.go:89] found id: ""
	I1210 06:40:09.971546  836363 logs.go:282] 0 containers: []
	W1210 06:40:09.971553  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:09.971558  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:09.971633  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:09.995748  836363 cri.go:89] found id: ""
	I1210 06:40:09.995762  836363 logs.go:282] 0 containers: []
	W1210 06:40:09.995768  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:09.995773  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:09.995832  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:10.026807  836363 cri.go:89] found id: ""
	I1210 06:40:10.026821  836363 logs.go:282] 0 containers: []
	W1210 06:40:10.026828  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:10.026834  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:10.026902  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:10.060800  836363 cri.go:89] found id: ""
	I1210 06:40:10.060815  836363 logs.go:282] 0 containers: []
	W1210 06:40:10.060822  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:10.060831  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:10.060896  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:10.092175  836363 cri.go:89] found id: ""
	I1210 06:40:10.092190  836363 logs.go:282] 0 containers: []
	W1210 06:40:10.092200  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:10.092205  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:10.092267  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:10.121165  836363 cri.go:89] found id: ""
	I1210 06:40:10.121179  836363 logs.go:282] 0 containers: []
	W1210 06:40:10.121187  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:10.121197  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:10.121208  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:10.137742  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:10.137761  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:10.202959  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:10.194457   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.195204   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.196954   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.197466   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.199061   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:10.194457   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.195204   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.196954   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.197466   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.199061   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:10.202970  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:10.202993  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:10.263838  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:10.263860  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:10.290431  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:10.290450  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:12.845609  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:12.856045  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:12.856108  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:12.881725  836363 cri.go:89] found id: ""
	I1210 06:40:12.881740  836363 logs.go:282] 0 containers: []
	W1210 06:40:12.881756  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:12.881762  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:12.881836  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:12.905554  836363 cri.go:89] found id: ""
	I1210 06:40:12.905568  836363 logs.go:282] 0 containers: []
	W1210 06:40:12.905575  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:12.905580  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:12.905636  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:12.929343  836363 cri.go:89] found id: ""
	I1210 06:40:12.929357  836363 logs.go:282] 0 containers: []
	W1210 06:40:12.929363  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:12.929369  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:12.929427  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:12.958063  836363 cri.go:89] found id: ""
	I1210 06:40:12.958077  836363 logs.go:282] 0 containers: []
	W1210 06:40:12.958083  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:12.958089  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:12.958153  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:12.982226  836363 cri.go:89] found id: ""
	I1210 06:40:12.982240  836363 logs.go:282] 0 containers: []
	W1210 06:40:12.982247  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:12.982252  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:12.982309  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:13.008275  836363 cri.go:89] found id: ""
	I1210 06:40:13.008296  836363 logs.go:282] 0 containers: []
	W1210 06:40:13.008304  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:13.008309  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:13.008376  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:13.032141  836363 cri.go:89] found id: ""
	I1210 06:40:13.032155  836363 logs.go:282] 0 containers: []
	W1210 06:40:13.032161  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:13.032169  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:13.032180  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:13.094529  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:13.094550  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:13.112774  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:13.112794  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:13.177133  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:13.169073   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.169669   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.171355   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.171773   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.173232   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:13.169073   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.169669   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.171355   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.171773   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.173232   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:13.177142  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:13.177157  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:13.237784  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:13.237804  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
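
	Editor's note: each retry cycle above enumerates the expected control-plane containers (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) via crictl and finds none, consistent with the kubelet never bringing up its static pods. An equivalent manual check, as a shell loop over the same names (a sketch of what the log shows, not minikube's own code):

	# list all containers, running or exited, for each expected component
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  echo "== $name =="
	  sudo crictl ps -a --quiet --name="$name"
	done
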
	I1210 06:40:15.773100  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:15.783808  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:15.783870  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:15.808779  836363 cri.go:89] found id: ""
	I1210 06:40:15.808792  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.808799  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:15.808811  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:15.808873  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:15.835122  836363 cri.go:89] found id: ""
	I1210 06:40:15.835136  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.835143  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:15.835147  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:15.835205  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:15.859608  836363 cri.go:89] found id: ""
	I1210 06:40:15.859622  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.859630  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:15.859635  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:15.859698  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:15.884617  836363 cri.go:89] found id: ""
	I1210 06:40:15.884631  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.884637  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:15.884648  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:15.884708  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:15.917645  836363 cri.go:89] found id: ""
	I1210 06:40:15.917659  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.917666  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:15.917671  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:15.917738  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:15.942216  836363 cri.go:89] found id: ""
	I1210 06:40:15.942230  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.942237  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:15.942246  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:15.942306  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:15.969023  836363 cri.go:89] found id: ""
	I1210 06:40:15.969038  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.969045  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:15.969053  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:15.969065  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:16.025303  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:16.025322  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:16.043036  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:16.043055  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:16.124792  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:16.116191   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.116732   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.118445   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.118910   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.120594   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:16.116191   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.116732   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.118445   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.118910   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.120594   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:16.124803  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:16.124829  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:16.187018  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:16.187038  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:18.721268  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:18.732117  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:18.732179  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:18.759703  836363 cri.go:89] found id: ""
	I1210 06:40:18.759717  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.759724  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:18.759729  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:18.759803  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:18.785469  836363 cri.go:89] found id: ""
	I1210 06:40:18.785482  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.785492  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:18.785497  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:18.785556  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:18.809013  836363 cri.go:89] found id: ""
	I1210 06:40:18.809026  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.809033  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:18.809038  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:18.809100  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:18.837693  836363 cri.go:89] found id: ""
	I1210 06:40:18.837707  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.837714  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:18.837719  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:18.837777  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:18.862280  836363 cri.go:89] found id: ""
	I1210 06:40:18.862294  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.862300  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:18.862306  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:18.862366  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:18.887552  836363 cri.go:89] found id: ""
	I1210 06:40:18.887566  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.887573  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:18.887578  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:18.887644  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:18.912972  836363 cri.go:89] found id: ""
	I1210 06:40:18.912987  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.912994  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:18.913002  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:18.913020  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:18.968777  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:18.968818  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:18.987249  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:18.987267  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:19.053510  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:19.044510   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.045135   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.047145   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.047494   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.049061   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:19.044510   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.045135   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.047145   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.047494   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.049061   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:19.053536  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:19.053548  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:19.127699  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:19.127719  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:21.655771  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:21.665930  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:21.665996  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:21.690403  836363 cri.go:89] found id: ""
	I1210 06:40:21.690417  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.690424  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:21.690429  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:21.690526  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:21.716021  836363 cri.go:89] found id: ""
	I1210 06:40:21.716035  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.716042  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:21.716047  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:21.716110  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:21.740524  836363 cri.go:89] found id: ""
	I1210 06:40:21.740538  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.740545  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:21.740551  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:21.740610  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:21.764686  836363 cri.go:89] found id: ""
	I1210 06:40:21.764699  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.764706  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:21.764711  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:21.764768  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:21.789476  836363 cri.go:89] found id: ""
	I1210 06:40:21.789490  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.789497  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:21.789502  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:21.789567  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:21.815957  836363 cri.go:89] found id: ""
	I1210 06:40:21.815973  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.815981  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:21.815986  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:21.816046  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:21.844568  836363 cri.go:89] found id: ""
	I1210 06:40:21.844582  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.844589  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:21.844597  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:21.844607  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:21.900940  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:21.900960  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:21.919059  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:21.919078  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:21.988088  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:21.979038   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.979792   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.981425   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.981947   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.983526   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:21.979038   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.979792   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.981425   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.981947   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.983526   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:21.988098  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:21.988109  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:22.051814  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:22.051834  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:24.585034  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:24.595723  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:24.595789  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:24.624873  836363 cri.go:89] found id: ""
	I1210 06:40:24.624888  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.624895  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:24.624900  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:24.624966  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:24.649543  836363 cri.go:89] found id: ""
	I1210 06:40:24.649557  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.649564  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:24.649570  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:24.649680  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:24.675056  836363 cri.go:89] found id: ""
	I1210 06:40:24.675080  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.675088  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:24.675093  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:24.675154  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:24.700453  836363 cri.go:89] found id: ""
	I1210 06:40:24.700466  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.700474  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:24.700479  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:24.700537  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:24.726867  836363 cri.go:89] found id: ""
	I1210 06:40:24.726881  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.726887  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:24.726893  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:24.726955  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:24.751980  836363 cri.go:89] found id: ""
	I1210 06:40:24.751994  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.752002  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:24.752007  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:24.752068  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:24.782328  836363 cri.go:89] found id: ""
	I1210 06:40:24.782342  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.782349  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:24.782357  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:24.782367  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:24.845411  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:24.845431  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:24.874554  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:24.874571  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:24.930797  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:24.930817  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:24.947891  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:24.947910  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:25.021562  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:25.011415   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.012701   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.013093   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.014987   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.015492   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:25.011415   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.012701   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.013093   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.014987   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.015492   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
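
	Editor's note: the log-gathering steps in each cycle are ordinary journalctl/dmesg invocations and can be run by hand on the node to inspect why the kubelet is not starting the control plane. The commands below are lifted verbatim from the Run: lines in this log:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
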
	I1210 06:40:27.522215  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:27.533345  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:27.533449  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:27.562516  836363 cri.go:89] found id: ""
	I1210 06:40:27.562529  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.562538  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:27.562543  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:27.562612  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:27.589053  836363 cri.go:89] found id: ""
	I1210 06:40:27.589081  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.589089  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:27.589098  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:27.589171  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:27.614058  836363 cri.go:89] found id: ""
	I1210 06:40:27.614072  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.614079  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:27.614084  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:27.614142  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:27.639274  836363 cri.go:89] found id: ""
	I1210 06:40:27.639288  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.639296  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:27.639310  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:27.639369  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:27.667535  836363 cri.go:89] found id: ""
	I1210 06:40:27.667549  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.667556  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:27.667561  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:27.667630  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:27.691075  836363 cri.go:89] found id: ""
	I1210 06:40:27.691090  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.691097  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:27.691102  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:27.691161  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:27.716129  836363 cri.go:89] found id: ""
	I1210 06:40:27.716142  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.716150  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:27.716157  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:27.716168  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:27.771440  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:27.771460  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:27.788230  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:27.788248  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:27.854509  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:27.846532   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.847111   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.848660   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.849114   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.850650   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:27.846532   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.847111   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.848660   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.849114   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.850650   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:27.854521  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:27.854533  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:27.922148  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:27.922172  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:30.451005  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:30.461920  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:30.461982  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:30.489712  836363 cri.go:89] found id: ""
	I1210 06:40:30.489727  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.489734  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:30.489739  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:30.489800  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:30.513093  836363 cri.go:89] found id: ""
	I1210 06:40:30.513107  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.513114  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:30.513119  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:30.513196  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:30.539836  836363 cri.go:89] found id: ""
	I1210 06:40:30.539850  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.539857  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:30.539862  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:30.539921  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:30.563675  836363 cri.go:89] found id: ""
	I1210 06:40:30.563689  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.563696  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:30.563701  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:30.563768  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:30.587925  836363 cri.go:89] found id: ""
	I1210 06:40:30.587939  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.587946  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:30.587951  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:30.588014  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:30.612003  836363 cri.go:89] found id: ""
	I1210 06:40:30.612018  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.612025  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:30.612031  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:30.612094  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:30.640838  836363 cri.go:89] found id: ""
	I1210 06:40:30.640853  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.640860  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:30.640868  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:30.640879  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:30.696168  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:30.696189  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:30.712444  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:30.712461  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:30.779602  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:30.771019   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.771655   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.773324   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.773861   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.775400   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:30.771019   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.771655   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.773324   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.773861   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.775400   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:30.779612  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:30.779623  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:30.840751  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:30.840772  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:33.372644  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:33.382802  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:33.382862  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:33.407793  836363 cri.go:89] found id: ""
	I1210 06:40:33.407807  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.407815  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:33.407820  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:33.407877  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:33.430878  836363 cri.go:89] found id: ""
	I1210 06:40:33.430892  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.430899  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:33.430904  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:33.430960  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:33.454595  836363 cri.go:89] found id: ""
	I1210 06:40:33.454609  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.454616  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:33.454621  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:33.454678  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:33.479328  836363 cri.go:89] found id: ""
	I1210 06:40:33.479342  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.479349  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:33.479354  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:33.479416  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:33.503717  836363 cri.go:89] found id: ""
	I1210 06:40:33.503731  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.503744  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:33.503750  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:33.503811  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:33.527968  836363 cri.go:89] found id: ""
	I1210 06:40:33.527982  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.527989  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:33.527994  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:33.528076  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:33.552452  836363 cri.go:89] found id: ""
	I1210 06:40:33.552465  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.552472  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:33.552480  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:33.552490  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:33.586111  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:33.586127  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:33.644722  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:33.644742  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:33.663073  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:33.663090  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:33.731033  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:33.723109   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.723789   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.725296   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.725727   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.727228   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:40:33.731044  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:33.731060  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:36.294593  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:36.306076  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:36.306134  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:36.334361  836363 cri.go:89] found id: ""
	I1210 06:40:36.334376  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.334383  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:36.334388  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:36.334447  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:36.361890  836363 cri.go:89] found id: ""
	I1210 06:40:36.361904  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.361911  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:36.361916  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:36.361977  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:36.387023  836363 cri.go:89] found id: ""
	I1210 06:40:36.387037  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.387044  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:36.387050  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:36.387109  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:36.411981  836363 cri.go:89] found id: ""
	I1210 06:40:36.411995  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.412011  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:36.412016  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:36.412085  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:36.436105  836363 cri.go:89] found id: ""
	I1210 06:40:36.436119  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.436136  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:36.436142  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:36.436215  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:36.463709  836363 cri.go:89] found id: ""
	I1210 06:40:36.463724  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.463731  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:36.463737  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:36.463795  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:36.492482  836363 cri.go:89] found id: ""
	I1210 06:40:36.492496  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.492503  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:36.492512  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:36.492522  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:36.551191  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:36.551210  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:36.568166  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:36.568183  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:36.635783  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:36.627478   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.627875   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.629429   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.629764   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.631231   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:40:36.635793  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:36.635806  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:36.706158  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:36.706182  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:39.240421  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:39.250806  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:39.250867  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:39.275350  836363 cri.go:89] found id: ""
	I1210 06:40:39.275363  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.275370  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:39.275375  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:39.275431  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:39.309499  836363 cri.go:89] found id: ""
	I1210 06:40:39.309515  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.309522  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:39.309527  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:39.309605  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:39.335376  836363 cri.go:89] found id: ""
	I1210 06:40:39.335390  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.335397  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:39.335401  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:39.335460  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:39.364171  836363 cri.go:89] found id: ""
	I1210 06:40:39.364185  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.364192  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:39.364197  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:39.364261  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:39.390366  836363 cri.go:89] found id: ""
	I1210 06:40:39.390381  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.390388  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:39.390393  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:39.390456  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:39.418420  836363 cri.go:89] found id: ""
	I1210 06:40:39.418434  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.418441  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:39.418448  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:39.418525  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:39.443654  836363 cri.go:89] found id: ""
	I1210 06:40:39.443667  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.443674  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:39.443683  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:39.443693  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:39.508605  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:39.508627  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:39.541642  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:39.541657  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:39.598637  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:39.598658  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:39.614821  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:39.614837  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:39.681178  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:39.672580   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.673146   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.674858   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.675468   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.677195   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:40:42.181674  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:42.194020  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:42.194088  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:42.223014  836363 cri.go:89] found id: ""
	I1210 06:40:42.223033  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.223041  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:42.223053  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:42.223128  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:42.250171  836363 cri.go:89] found id: ""
	I1210 06:40:42.250186  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.250193  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:42.250199  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:42.250267  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:42.276322  836363 cri.go:89] found id: ""
	I1210 06:40:42.276343  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.276350  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:42.276356  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:42.276417  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:42.312287  836363 cri.go:89] found id: ""
	I1210 06:40:42.312302  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.312309  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:42.312314  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:42.312379  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:42.339930  836363 cri.go:89] found id: ""
	I1210 06:40:42.339944  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.339951  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:42.339956  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:42.340014  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:42.367830  836363 cri.go:89] found id: ""
	I1210 06:40:42.367844  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.367851  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:42.367857  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:42.367919  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:42.392070  836363 cri.go:89] found id: ""
	I1210 06:40:42.392084  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.392091  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:42.392099  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:42.392109  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:42.426049  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:42.426065  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:42.481003  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:42.481025  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:42.497786  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:42.497804  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:42.565103  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:42.556363   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.556746   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.558351   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.558980   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.560866   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:40:42.565114  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:42.565124  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
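Every describe-nodes attempt fails identically because the in-node kubectl reads /var/lib/minikube/kubeconfig, which points at https://localhost:8441, and nothing is listening on that port while the apiserver container is absent. A quick way to confirm the target address (a sketch; the jsonpath follows the standard kubeconfig schema):

    # Hypothetical check inside the node: print the server URL the kubeconfig targets.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      config view -o jsonpath='{.clusters[0].cluster.server}'

The connection-refused errors above therefore indicate a missing listener, not a wrong address.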
	I1210 06:40:45.129131  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:45.143244  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:45.143317  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:45.185169  836363 cri.go:89] found id: ""
	I1210 06:40:45.185203  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.185235  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:45.185259  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:45.185400  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:45.232743  836363 cri.go:89] found id: ""
	I1210 06:40:45.232760  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.232767  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:45.232774  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:45.232857  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:45.264531  836363 cri.go:89] found id: ""
	I1210 06:40:45.264564  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.264573  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:45.264585  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:45.264652  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:45.304876  836363 cri.go:89] found id: ""
	I1210 06:40:45.304891  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.304898  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:45.304912  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:45.304975  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:45.332686  836363 cri.go:89] found id: ""
	I1210 06:40:45.332700  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.332707  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:45.332713  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:45.332772  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:45.361418  836363 cri.go:89] found id: ""
	I1210 06:40:45.361443  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.361454  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:45.361460  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:45.361549  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:45.389935  836363 cri.go:89] found id: ""
	I1210 06:40:45.389949  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.389955  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:45.389963  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:45.389973  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:45.446063  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:45.446081  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:45.463171  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:45.463188  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:45.529007  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:45.520759   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.521319   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.522920   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.523417   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.524918   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:40:45.529017  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:45.529027  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:45.596607  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:45.596629  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:48.127693  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:48.138167  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:48.138229  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:48.163699  836363 cri.go:89] found id: ""
	I1210 06:40:48.163713  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.163720  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:48.163726  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:48.163788  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:48.187478  836363 cri.go:89] found id: ""
	I1210 06:40:48.187491  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.187498  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:48.187503  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:48.187571  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:48.210551  836363 cri.go:89] found id: ""
	I1210 06:40:48.210565  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.210572  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:48.210577  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:48.210635  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:48.234710  836363 cri.go:89] found id: ""
	I1210 06:40:48.234723  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.234730  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:48.234735  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:48.234792  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:48.257754  836363 cri.go:89] found id: ""
	I1210 06:40:48.257767  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.257774  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:48.257779  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:48.257837  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:48.281482  836363 cri.go:89] found id: ""
	I1210 06:40:48.281497  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.281503  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:48.281508  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:48.281571  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:48.321472  836363 cri.go:89] found id: ""
	I1210 06:40:48.321486  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.321493  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:48.321501  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:48.321519  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:48.353157  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:48.353176  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:48.414214  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:48.414234  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:48.431305  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:48.431324  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:48.504839  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:48.496885   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.497412   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.499192   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.499575   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.501075   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:40:48.504849  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:48.504860  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:51.069620  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:51.080075  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:51.080142  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:51.110642  836363 cri.go:89] found id: ""
	I1210 06:40:51.110656  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.110663  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:51.110668  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:51.110735  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:51.135875  836363 cri.go:89] found id: ""
	I1210 06:40:51.135889  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.135897  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:51.135902  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:51.135969  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:51.160992  836363 cri.go:89] found id: ""
	I1210 06:40:51.161007  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.161014  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:51.161019  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:51.161079  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:51.190942  836363 cri.go:89] found id: ""
	I1210 06:40:51.190957  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.190964  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:51.190969  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:51.191028  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:51.214853  836363 cri.go:89] found id: ""
	I1210 06:40:51.214866  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.214873  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:51.214878  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:51.214934  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:51.238972  836363 cri.go:89] found id: ""
	I1210 06:40:51.238986  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.238993  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:51.238998  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:51.239056  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:51.263101  836363 cri.go:89] found id: ""
	I1210 06:40:51.263115  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.263122  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:51.263130  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:51.263147  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:51.334552  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:51.325962   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.326878   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.328565   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.328869   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.330403   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:40:51.334562  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:51.334574  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:51.405170  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:51.405189  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:51.433244  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:51.433261  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:51.491472  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:51.491494  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:54.008401  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:54.019572  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:54.019640  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:54.049412  836363 cri.go:89] found id: ""
	I1210 06:40:54.049427  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.049434  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:54.049439  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:54.049505  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:54.074298  836363 cri.go:89] found id: ""
	I1210 06:40:54.074313  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.074319  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:54.074324  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:54.074384  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:54.102940  836363 cri.go:89] found id: ""
	I1210 06:40:54.102954  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.102961  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:54.102966  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:54.103030  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:54.127504  836363 cri.go:89] found id: ""
	I1210 06:40:54.127543  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.127556  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:54.127561  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:54.127619  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:54.156807  836363 cri.go:89] found id: ""
	I1210 06:40:54.156822  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.156829  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:54.156833  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:54.156896  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:54.181320  836363 cri.go:89] found id: ""
	I1210 06:40:54.181335  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.181342  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:54.181348  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:54.181406  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:54.205593  836363 cri.go:89] found id: ""
	I1210 06:40:54.205605  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.205612  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:54.205620  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:54.205631  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:54.222285  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:54.222301  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:54.288392  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:54.279932   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.280608   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.282205   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.282786   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.284468   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:40:54.288402  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:54.288423  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:54.357504  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:54.357523  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:54.391376  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:54.391394  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
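Since no control-plane container ever appears, the apiserver port itself is the quickest liveness signal. A direct probe (illustrative, and it assumes curl is available in the node image; /healthz is a standard kube-apiserver endpoint, and -k skips verification against the cluster CA):

    # Hypothetical direct probe of the apiserver port from inside the node.
    curl -sk https://localhost:8441/healthz || echo "apiserver not reachable on 8441"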
	I1210 06:40:56.947968  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:56.957769  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:56.957833  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:56.981684  836363 cri.go:89] found id: ""
	I1210 06:40:56.981698  836363 logs.go:282] 0 containers: []
	W1210 06:40:56.981704  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:56.981709  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:56.981773  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:57.008321  836363 cri.go:89] found id: ""
	I1210 06:40:57.008336  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.008344  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:57.008348  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:57.008409  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:57.033150  836363 cri.go:89] found id: ""
	I1210 06:40:57.033164  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.033171  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:57.033175  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:57.033234  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:57.061083  836363 cri.go:89] found id: ""
	I1210 06:40:57.061096  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.061103  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:57.061108  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:57.061167  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:57.084352  836363 cri.go:89] found id: ""
	I1210 06:40:57.084366  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.084372  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:57.084377  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:57.084432  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:57.108194  836363 cri.go:89] found id: ""
	I1210 06:40:57.108225  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.108239  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:57.108244  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:57.108315  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:57.136912  836363 cri.go:89] found id: ""
	I1210 06:40:57.136926  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.136935  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:57.136942  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:57.136953  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:57.198446  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:57.198510  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:57.225389  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:57.225406  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:57.283570  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:57.283589  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:57.301703  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:57.301727  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:57.380612  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:57.372663   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.373165   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.374676   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.375061   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.376625   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
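Each retry cycle enumerates the same seven control-plane and CNI components through crictl before giving up and collecting logs. A sketch of that enumeration as a standalone shell loop, run on the node itself, using only the component names and the crictl invocation that appear in the log:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"   # prints matching container IDs; empty here for every component
	done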
	I1210 06:40:59.880952  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:59.891486  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:59.891569  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:59.915927  836363 cri.go:89] found id: ""
	I1210 06:40:59.915941  836363 logs.go:282] 0 containers: []
	W1210 06:40:59.915947  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:59.915953  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:59.916013  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:59.944178  836363 cri.go:89] found id: ""
	I1210 06:40:59.944192  836363 logs.go:282] 0 containers: []
	W1210 06:40:59.944200  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:59.944205  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:59.944264  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:59.969112  836363 cri.go:89] found id: ""
	I1210 06:40:59.969126  836363 logs.go:282] 0 containers: []
	W1210 06:40:59.969133  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:59.969138  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:59.969201  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:59.994908  836363 cri.go:89] found id: ""
	I1210 06:40:59.994922  836363 logs.go:282] 0 containers: []
	W1210 06:40:59.994929  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:59.994934  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:59.994991  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:00.092005  836363 cri.go:89] found id: ""
	I1210 06:41:00.092022  836363 logs.go:282] 0 containers: []
	W1210 06:41:00.092030  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:00.092036  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:00.092110  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:00.176527  836363 cri.go:89] found id: ""
	I1210 06:41:00.176549  836363 logs.go:282] 0 containers: []
	W1210 06:41:00.176557  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:00.176563  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:00.176628  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:00.227381  836363 cri.go:89] found id: ""
	I1210 06:41:00.227398  836363 logs.go:282] 0 containers: []
	W1210 06:41:00.227406  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:00.227414  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:00.227427  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:00.330232  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:00.330255  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:00.363949  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:00.363967  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:00.445659  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:00.436629   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.437562   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.439318   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.439706   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.441418   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:00.445669  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:00.445681  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:00.509415  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:00.509440  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:03.043380  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:03.053715  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:03.053796  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:03.079434  836363 cri.go:89] found id: ""
	I1210 06:41:03.079449  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.079456  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:03.079462  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:03.079520  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:03.112748  836363 cri.go:89] found id: ""
	I1210 06:41:03.112761  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.112768  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:03.112773  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:03.112831  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:03.137303  836363 cri.go:89] found id: ""
	I1210 06:41:03.137317  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.137324  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:03.137329  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:03.137390  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:03.162303  836363 cri.go:89] found id: ""
	I1210 06:41:03.162317  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.162324  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:03.162329  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:03.162387  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:03.186423  836363 cri.go:89] found id: ""
	I1210 06:41:03.186438  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.186445  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:03.186449  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:03.186542  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:03.215070  836363 cri.go:89] found id: ""
	I1210 06:41:03.215084  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.215091  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:03.215096  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:03.215154  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:03.238820  836363 cri.go:89] found id: ""
	I1210 06:41:03.238834  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.238841  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:03.238850  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:03.238861  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:03.293835  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:03.293853  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:03.312548  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:03.312565  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:03.381504  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:03.373169   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.373896   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.375591   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.376023   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.377455   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:03.381514  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:03.381524  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:03.444806  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:03.444826  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:05.972428  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:05.982168  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:05.982226  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:06.011191  836363 cri.go:89] found id: ""
	I1210 06:41:06.011206  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.011214  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:06.011220  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:06.011295  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:06.038921  836363 cri.go:89] found id: ""
	I1210 06:41:06.038937  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.038944  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:06.038949  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:06.039011  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:06.063412  836363 cri.go:89] found id: ""
	I1210 06:41:06.063426  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.063433  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:06.063438  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:06.063497  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:06.087777  836363 cri.go:89] found id: ""
	I1210 06:41:06.087800  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.087807  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:06.087812  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:06.087881  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:06.112794  836363 cri.go:89] found id: ""
	I1210 06:41:06.112809  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.112815  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:06.112821  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:06.112877  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:06.137620  836363 cri.go:89] found id: ""
	I1210 06:41:06.137634  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.137641  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:06.137645  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:06.137702  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:06.164245  836363 cri.go:89] found id: ""
	I1210 06:41:06.164259  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.164266  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:06.164274  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:06.164331  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:06.219975  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:06.219994  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:06.236571  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:06.236596  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:06.309920  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:06.294848   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.295676   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.297523   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.298004   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.299656   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:06.309934  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:06.309944  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:06.383624  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:06.383646  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
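When no containers are found, the gathering step always pulls the same four sources: the kubelet and containerd unit journals, filtered dmesg, and a container listing with a fallback chain. Reproduced as plain shell, verbatim from the commands in the log (the `which crictl || echo crictl` trick retries the bare name if crictl is not on PATH, and docker ps -a is the last resort):

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a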
	I1210 06:41:08.911581  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:08.923631  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:08.923713  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:08.950073  836363 cri.go:89] found id: ""
	I1210 06:41:08.950087  836363 logs.go:282] 0 containers: []
	W1210 06:41:08.950094  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:08.950100  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:08.950157  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:08.976323  836363 cri.go:89] found id: ""
	I1210 06:41:08.976337  836363 logs.go:282] 0 containers: []
	W1210 06:41:08.976345  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:08.976349  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:08.976409  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:09.001975  836363 cri.go:89] found id: ""
	I1210 06:41:09.001991  836363 logs.go:282] 0 containers: []
	W1210 06:41:09.001998  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:09.002004  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:09.002076  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:09.027223  836363 cri.go:89] found id: ""
	I1210 06:41:09.027237  836363 logs.go:282] 0 containers: []
	W1210 06:41:09.027250  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:09.027256  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:09.027314  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:09.051870  836363 cri.go:89] found id: ""
	I1210 06:41:09.051884  836363 logs.go:282] 0 containers: []
	W1210 06:41:09.051890  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:09.051896  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:09.051955  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:09.075643  836363 cri.go:89] found id: ""
	I1210 06:41:09.075658  836363 logs.go:282] 0 containers: []
	W1210 06:41:09.075678  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:09.075684  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:09.075740  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:09.100390  836363 cri.go:89] found id: ""
	I1210 06:41:09.100404  836363 logs.go:282] 0 containers: []
	W1210 06:41:09.100411  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:09.100419  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:09.100430  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:09.164481  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:09.156151   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.156953   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.158536   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.159001   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.160652   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:09.164492  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:09.164502  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:09.228784  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:09.228804  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:09.256846  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:09.256863  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:09.312682  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:09.312702  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:11.842135  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:11.852673  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:11.852735  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:11.877129  836363 cri.go:89] found id: ""
	I1210 06:41:11.877144  836363 logs.go:282] 0 containers: []
	W1210 06:41:11.877151  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:11.877156  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:11.877215  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:11.902763  836363 cri.go:89] found id: ""
	I1210 06:41:11.902777  836363 logs.go:282] 0 containers: []
	W1210 06:41:11.902784  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:11.902789  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:11.902863  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:11.927125  836363 cri.go:89] found id: ""
	I1210 06:41:11.927139  836363 logs.go:282] 0 containers: []
	W1210 06:41:11.927146  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:11.927150  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:11.927206  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:11.966123  836363 cri.go:89] found id: ""
	I1210 06:41:11.966137  836363 logs.go:282] 0 containers: []
	W1210 06:41:11.966144  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:11.966149  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:11.966205  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:11.990046  836363 cri.go:89] found id: ""
	I1210 06:41:11.990059  836363 logs.go:282] 0 containers: []
	W1210 06:41:11.990067  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:11.990072  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:11.990132  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:12.015096  836363 cri.go:89] found id: ""
	I1210 06:41:12.015111  836363 logs.go:282] 0 containers: []
	W1210 06:41:12.015118  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:12.015124  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:12.015185  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:12.040883  836363 cri.go:89] found id: ""
	I1210 06:41:12.040897  836363 logs.go:282] 0 containers: []
	W1210 06:41:12.040905  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:12.040912  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:12.040923  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:12.067975  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:12.067991  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:12.124161  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:12.124181  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:12.141074  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:12.141090  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:12.204309  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:12.196503   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.197043   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.198523   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.199003   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.200458   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:12.204325  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:12.204336  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:14.770164  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:14.781008  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:14.781070  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:14.810029  836363 cri.go:89] found id: ""
	I1210 06:41:14.810042  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.810051  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:14.810056  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:14.810115  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:14.834988  836363 cri.go:89] found id: ""
	I1210 06:41:14.835002  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.835009  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:14.835015  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:14.835076  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:14.859273  836363 cri.go:89] found id: ""
	I1210 06:41:14.859287  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.859294  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:14.859299  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:14.859358  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:14.884024  836363 cri.go:89] found id: ""
	I1210 06:41:14.884038  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.884045  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:14.884051  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:14.884111  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:14.907573  836363 cri.go:89] found id: ""
	I1210 06:41:14.907587  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.907596  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:14.907601  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:14.907660  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:14.932198  836363 cri.go:89] found id: ""
	I1210 06:41:14.932212  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.932219  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:14.932225  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:14.932285  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:14.957047  836363 cri.go:89] found id: ""
	I1210 06:41:14.957062  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.957069  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:14.957077  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:14.957087  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:15.015819  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:15.015841  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:15.035356  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:15.035387  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:15.111422  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:15.102537   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.103449   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.105131   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.105642   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.107373   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:15.111434  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:15.111446  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:15.173911  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:15.173930  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:17.707403  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:17.717581  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:17.717645  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:17.741545  836363 cri.go:89] found id: ""
	I1210 06:41:17.741559  836363 logs.go:282] 0 containers: []
	W1210 06:41:17.741566  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:17.741572  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:17.741630  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:17.766133  836363 cri.go:89] found id: ""
	I1210 06:41:17.766147  836363 logs.go:282] 0 containers: []
	W1210 06:41:17.766154  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:17.766159  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:17.766213  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:17.790714  836363 cri.go:89] found id: ""
	I1210 06:41:17.790728  836363 logs.go:282] 0 containers: []
	W1210 06:41:17.790735  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:17.790740  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:17.790795  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:17.814639  836363 cri.go:89] found id: ""
	I1210 06:41:17.814653  836363 logs.go:282] 0 containers: []
	W1210 06:41:17.814660  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:17.814666  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:17.814721  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:17.839269  836363 cri.go:89] found id: ""
	I1210 06:41:17.839283  836363 logs.go:282] 0 containers: []
	W1210 06:41:17.839290  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:17.839295  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:17.839353  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:17.864188  836363 cri.go:89] found id: ""
	I1210 06:41:17.864202  836363 logs.go:282] 0 containers: []
	W1210 06:41:17.864209  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:17.864214  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:17.864273  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:17.889103  836363 cri.go:89] found id: ""
	I1210 06:41:17.889117  836363 logs.go:282] 0 containers: []
	W1210 06:41:17.889124  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:17.889132  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:17.889142  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:17.945534  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:17.945553  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:17.962119  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:17.962136  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:18.031737  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:18.022190   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:18.023153   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:18.024970   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:18.025609   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:18.027479   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:18.031747  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:18.031758  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:18.095025  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:18.095045  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
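The cycle repeats on a roughly three-second cadence (06:41:14.77, 06:41:17.70, 06:41:20.62 in the pgrep lines), and the two checks below are the ones that would flip first if the apiserver recovered. Both are copied from the log; the pgrep pattern is quoted here only to make it safe to paste into an interactive shell:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig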
	I1210 06:41:20.626616  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:20.637064  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:20.637135  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:20.661085  836363 cri.go:89] found id: ""
	I1210 06:41:20.661098  836363 logs.go:282] 0 containers: []
	W1210 06:41:20.661105  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:20.661110  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:20.661170  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:20.686407  836363 cri.go:89] found id: ""
	I1210 06:41:20.686420  836363 logs.go:282] 0 containers: []
	W1210 06:41:20.686427  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:20.686432  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:20.686519  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:20.710905  836363 cri.go:89] found id: ""
	I1210 06:41:20.710919  836363 logs.go:282] 0 containers: []
	W1210 06:41:20.710926  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:20.710931  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:20.710989  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:20.735241  836363 cri.go:89] found id: ""
	I1210 06:41:20.735255  836363 logs.go:282] 0 containers: []
	W1210 06:41:20.735262  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:20.735268  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:20.735326  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:20.762996  836363 cri.go:89] found id: ""
	I1210 06:41:20.763010  836363 logs.go:282] 0 containers: []
	W1210 06:41:20.763017  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:20.763022  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:20.763080  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:20.793084  836363 cri.go:89] found id: ""
	I1210 06:41:20.793098  836363 logs.go:282] 0 containers: []
	W1210 06:41:20.793105  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:20.793111  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:20.793167  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:20.821259  836363 cri.go:89] found id: ""
	I1210 06:41:20.821274  836363 logs.go:282] 0 containers: []
	W1210 06:41:20.821281  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:20.821289  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:20.821300  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:20.876655  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:20.876676  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:20.894043  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:20.894060  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:20.967195  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:20.958394   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:20.959075   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:20.960382   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:20.961013   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:20.962652   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:20.967206  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:20.967217  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:21.028930  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:21.028949  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
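Each cycle then asks the CRI runtime for containers by component name; `crictl ps -a --quiet --name=X` prints one container ID per line, so the empty `found id: ""` results are exactly what drive the `No container was found matching ...` warnings. A self-contained sketch of the same per-component sweep (commands copied from the log; the wrapper program is illustrative only):

// list_control_plane.go - sketch of the per-component container check:
// empty output from `crictl ps -a --quiet --name=<component>` means the
// component never got a container created for it.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out)) // one container ID per line
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}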
	I1210 06:41:23.559672  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:23.572318  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:23.572395  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:23.603800  836363 cri.go:89] found id: ""
	I1210 06:41:23.603814  836363 logs.go:282] 0 containers: []
	W1210 06:41:23.603821  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:23.603827  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:23.603900  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:23.634190  836363 cri.go:89] found id: ""
	I1210 06:41:23.634205  836363 logs.go:282] 0 containers: []
	W1210 06:41:23.634212  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:23.634217  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:23.634277  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:23.664876  836363 cri.go:89] found id: ""
	I1210 06:41:23.664890  836363 logs.go:282] 0 containers: []
	W1210 06:41:23.664898  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:23.664904  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:23.664974  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:23.693167  836363 cri.go:89] found id: ""
	I1210 06:41:23.693182  836363 logs.go:282] 0 containers: []
	W1210 06:41:23.693189  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:23.693196  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:23.693264  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:23.719371  836363 cri.go:89] found id: ""
	I1210 06:41:23.719385  836363 logs.go:282] 0 containers: []
	W1210 06:41:23.719393  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:23.719398  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:23.719460  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:23.745307  836363 cri.go:89] found id: ""
	I1210 06:41:23.745321  836363 logs.go:282] 0 containers: []
	W1210 06:41:23.745328  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:23.745334  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:23.745399  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:23.773016  836363 cri.go:89] found id: ""
	I1210 06:41:23.773031  836363 logs.go:282] 0 containers: []
	W1210 06:41:23.773038  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:23.773046  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:23.773056  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:23.829249  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:23.829268  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:23.846743  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:23.846761  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:23.915363  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:23.907095   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:23.907839   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:23.909482   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:23.910101   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:23.911295   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:23.915374  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:23.915385  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:23.977818  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:23.977838  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
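The `cri.go` lines name containerd's runc root `/run/containerd/runc/k8s.io`: kubelet-created containers live in containerd's `k8s.io` namespace, which crictl reaches through the CRI socket. As a cross-check, containerd's own `ctr` tool can list the same namespace directly; a hedged sketch, assuming `ctr` is available on the node:

// list_ctr.go - sketch cross-checking the CRI view against containerd
// itself: an empty "k8s.io" namespace confirms no Kubernetes containers
// were ever created, independent of crictl.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "ctr", "--namespace", "k8s.io", "containers", "list").CombinedOutput()
	if err != nil {
		fmt.Println("ctr failed:", err)
	}
	fmt.Printf("%s", out)
}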
	I1210 06:41:26.512080  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:26.522967  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:26.523031  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:26.556941  836363 cri.go:89] found id: ""
	I1210 06:41:26.556955  836363 logs.go:282] 0 containers: []
	W1210 06:41:26.556962  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:26.556967  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:26.557028  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:26.583709  836363 cri.go:89] found id: ""
	I1210 06:41:26.583723  836363 logs.go:282] 0 containers: []
	W1210 06:41:26.583731  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:26.583737  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:26.583794  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:26.620398  836363 cri.go:89] found id: ""
	I1210 06:41:26.620411  836363 logs.go:282] 0 containers: []
	W1210 06:41:26.620418  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:26.620424  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:26.620488  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:26.645205  836363 cri.go:89] found id: ""
	I1210 06:41:26.645220  836363 logs.go:282] 0 containers: []
	W1210 06:41:26.645227  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:26.645232  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:26.645295  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:26.672971  836363 cri.go:89] found id: ""
	I1210 06:41:26.672985  836363 logs.go:282] 0 containers: []
	W1210 06:41:26.672992  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:26.672996  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:26.673054  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:26.701966  836363 cri.go:89] found id: ""
	I1210 06:41:26.701980  836363 logs.go:282] 0 containers: []
	W1210 06:41:26.701987  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:26.701993  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:26.702051  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:26.726241  836363 cri.go:89] found id: ""
	I1210 06:41:26.726254  836363 logs.go:282] 0 containers: []
	W1210 06:41:26.726261  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:26.726269  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:26.726280  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:26.782519  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:26.782539  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:26.799105  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:26.799127  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:26.869131  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:26.860787   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:26.861476   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:26.863184   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:26.863795   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:26.865363   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:26.869141  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:26.869152  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:26.935169  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:26.935188  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:29.463208  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:29.473355  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:29.473417  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:29.497493  836363 cri.go:89] found id: ""
	I1210 06:41:29.497512  836363 logs.go:282] 0 containers: []
	W1210 06:41:29.497519  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:29.497524  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:29.497584  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:29.525346  836363 cri.go:89] found id: ""
	I1210 06:41:29.525360  836363 logs.go:282] 0 containers: []
	W1210 06:41:29.525366  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:29.525381  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:29.525485  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:29.553583  836363 cri.go:89] found id: ""
	I1210 06:41:29.553596  836363 logs.go:282] 0 containers: []
	W1210 06:41:29.553604  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:29.553609  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:29.553665  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:29.587462  836363 cri.go:89] found id: ""
	I1210 06:41:29.587476  836363 logs.go:282] 0 containers: []
	W1210 06:41:29.587483  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:29.587488  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:29.587559  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:29.625152  836363 cri.go:89] found id: ""
	I1210 06:41:29.625166  836363 logs.go:282] 0 containers: []
	W1210 06:41:29.625173  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:29.625178  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:29.625235  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:29.649760  836363 cri.go:89] found id: ""
	I1210 06:41:29.649773  836363 logs.go:282] 0 containers: []
	W1210 06:41:29.649781  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:29.649786  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:29.649843  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:29.674875  836363 cri.go:89] found id: ""
	I1210 06:41:29.674889  836363 logs.go:282] 0 containers: []
	W1210 06:41:29.674897  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:29.674904  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:29.674916  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:29.691346  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:29.691363  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:29.753565  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:29.745153   15719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:29.745754   15719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:29.747557   15719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:29.748093   15719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:29.749766   15719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:29.753580  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:29.753591  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:29.815732  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:29.815751  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:29.848125  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:29.848141  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
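Note that this cycle collects the same four log sources as the others, just in a different order: kubelet and containerd unit logs via journalctl, filtered kernel messages via dmesg (human-readable, no pager or color, warn level and above), and the container list via crictl with a docker fallback. They are ordinary commands, so the gathering pass can be replayed by hand; a sketch bundling the exact command lines from the log, assuming it runs on the node (for example over `minikube ssh -p functional-534748`):

// gather_logs.go - sketch replaying the four log-gathering commands shown
// above, verbatim, via a shell (the crictl entry needs command
// substitution, hence /bin/bash -c).
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
		fmt.Printf("==> %s (err=%v)\n%s\n", c.name, err, out)
	}
}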
	I1210 06:41:32.408296  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:32.419204  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:32.419279  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:32.445527  836363 cri.go:89] found id: ""
	I1210 06:41:32.445542  836363 logs.go:282] 0 containers: []
	W1210 06:41:32.445548  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:32.445553  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:32.445611  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:32.470075  836363 cri.go:89] found id: ""
	I1210 06:41:32.470088  836363 logs.go:282] 0 containers: []
	W1210 06:41:32.470095  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:32.470108  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:32.470164  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:32.494632  836363 cri.go:89] found id: ""
	I1210 06:41:32.494647  836363 logs.go:282] 0 containers: []
	W1210 06:41:32.494654  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:32.494658  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:32.494732  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:32.522542  836363 cri.go:89] found id: ""
	I1210 06:41:32.522555  836363 logs.go:282] 0 containers: []
	W1210 06:41:32.522568  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:32.522574  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:32.522641  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:32.557483  836363 cri.go:89] found id: ""
	I1210 06:41:32.557498  836363 logs.go:282] 0 containers: []
	W1210 06:41:32.557505  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:32.557511  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:32.557570  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:32.586583  836363 cri.go:89] found id: ""
	I1210 06:41:32.586598  836363 logs.go:282] 0 containers: []
	W1210 06:41:32.586605  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:32.586611  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:32.586673  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:32.614984  836363 cri.go:89] found id: ""
	I1210 06:41:32.614997  836363 logs.go:282] 0 containers: []
	W1210 06:41:32.615004  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:32.615012  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:32.615023  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:32.677103  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:32.669262   15821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:32.669805   15821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:32.671272   15821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:32.671743   15821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:32.673216   15821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:32.677113  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:32.677123  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:32.738003  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:32.738022  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:32.765472  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:32.765488  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:32.822384  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:32.822406  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:35.339259  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:35.349700  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:35.349758  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:35.375337  836363 cri.go:89] found id: ""
	I1210 06:41:35.375359  836363 logs.go:282] 0 containers: []
	W1210 06:41:35.375366  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:35.375371  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:35.375449  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:35.399613  836363 cri.go:89] found id: ""
	I1210 06:41:35.399627  836363 logs.go:282] 0 containers: []
	W1210 06:41:35.399634  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:35.399639  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:35.399696  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:35.423561  836363 cri.go:89] found id: ""
	I1210 06:41:35.423575  836363 logs.go:282] 0 containers: []
	W1210 06:41:35.423582  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:35.423588  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:35.423650  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:35.448165  836363 cri.go:89] found id: ""
	I1210 06:41:35.448179  836363 logs.go:282] 0 containers: []
	W1210 06:41:35.448186  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:35.448198  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:35.448256  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:35.476970  836363 cri.go:89] found id: ""
	I1210 06:41:35.476984  836363 logs.go:282] 0 containers: []
	W1210 06:41:35.476992  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:35.476997  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:35.477062  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:35.500993  836363 cri.go:89] found id: ""
	I1210 06:41:35.501007  836363 logs.go:282] 0 containers: []
	W1210 06:41:35.501024  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:35.501029  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:35.501087  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:35.530273  836363 cri.go:89] found id: ""
	I1210 06:41:35.530294  836363 logs.go:282] 0 containers: []
	W1210 06:41:35.530301  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:35.530309  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:35.530320  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:35.588229  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:35.588248  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:35.608295  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:35.608311  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:35.673227  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:35.664447   15930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:35.665198   15930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:35.667057   15930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:35.667693   15930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:35.669348   15930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:35.673237  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:35.673248  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:35.735230  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:35.735250  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:38.262657  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:38.273339  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:38.273403  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:38.298561  836363 cri.go:89] found id: ""
	I1210 06:41:38.298576  836363 logs.go:282] 0 containers: []
	W1210 06:41:38.298583  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:38.298588  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:38.298647  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:38.323273  836363 cri.go:89] found id: ""
	I1210 06:41:38.323294  836363 logs.go:282] 0 containers: []
	W1210 06:41:38.323301  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:38.323306  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:38.323369  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:38.348694  836363 cri.go:89] found id: ""
	I1210 06:41:38.348709  836363 logs.go:282] 0 containers: []
	W1210 06:41:38.348716  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:38.348721  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:38.348777  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:38.374030  836363 cri.go:89] found id: ""
	I1210 06:41:38.374044  836363 logs.go:282] 0 containers: []
	W1210 06:41:38.374052  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:38.374057  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:38.374116  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:38.399116  836363 cri.go:89] found id: ""
	I1210 06:41:38.399130  836363 logs.go:282] 0 containers: []
	W1210 06:41:38.399137  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:38.399142  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:38.399205  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:38.431922  836363 cri.go:89] found id: ""
	I1210 06:41:38.431936  836363 logs.go:282] 0 containers: []
	W1210 06:41:38.431943  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:38.431954  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:38.432015  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:38.456101  836363 cri.go:89] found id: ""
	I1210 06:41:38.456115  836363 logs.go:282] 0 containers: []
	W1210 06:41:38.456122  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:38.456130  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:38.456140  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:38.511923  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:38.511943  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:38.528342  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:38.528360  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:38.608737  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:38.599653   16028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:38.600438   16028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:38.601979   16028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:38.602518   16028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:38.604301   16028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:38.608759  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:38.608770  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:38.671052  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:38.671073  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:41.199012  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:41.208683  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:41.208748  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:41.232632  836363 cri.go:89] found id: ""
	I1210 06:41:41.232645  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.232652  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:41.232657  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:41.232718  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:41.255309  836363 cri.go:89] found id: ""
	I1210 06:41:41.255322  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.255329  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:41.255334  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:41.255388  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:41.279539  836363 cri.go:89] found id: ""
	I1210 06:41:41.279553  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.279560  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:41.279565  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:41.279636  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:41.306855  836363 cri.go:89] found id: ""
	I1210 06:41:41.306870  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.306877  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:41.306882  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:41.306943  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:41.331914  836363 cri.go:89] found id: ""
	I1210 06:41:41.331927  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.331933  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:41.331938  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:41.331998  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:41.355926  836363 cri.go:89] found id: ""
	I1210 06:41:41.355940  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.355947  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:41.355952  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:41.356022  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:41.380191  836363 cri.go:89] found id: ""
	I1210 06:41:41.380205  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.380213  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:41.380221  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:41.380237  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:41.396613  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:41.396631  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:41.460969  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:41.452836   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.453418   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.455027   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.455521   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.457097   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:41:41.460979  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:41.460991  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:41.522046  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:41.522066  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:41.556015  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:41.556032  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
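The recurring `describe nodes` failure is itself just a kubectl invocation against the kubeconfig staged on the node, so it can be replayed independently of the harness to confirm the problem is the missing apiserver rather than client configuration. A sketch re-running the exact command from the log (binary and kubeconfig paths copied verbatim; the wrapper is illustrative):

// describe_nodes.go - sketch replaying the failing check by hand; with no
// apiserver listening on 8441 it exits 1 and prints the same "connection
// refused" lines captured throughout this report.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
	).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("describe nodes failed:", err)
	}
}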
	I1210 06:41:44.133635  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:44.143661  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:44.143725  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:44.170247  836363 cri.go:89] found id: ""
	I1210 06:41:44.170262  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.170269  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:44.170274  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:44.170341  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:44.195020  836363 cri.go:89] found id: ""
	I1210 06:41:44.195034  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.195040  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:44.195045  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:44.195101  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:44.219352  836363 cri.go:89] found id: ""
	I1210 06:41:44.219366  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.219373  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:44.219378  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:44.219435  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:44.247508  836363 cri.go:89] found id: ""
	I1210 06:41:44.247522  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.247529  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:44.247534  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:44.247593  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:44.271983  836363 cri.go:89] found id: ""
	I1210 06:41:44.271997  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.272004  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:44.272009  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:44.272066  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:44.295908  836363 cri.go:89] found id: ""
	I1210 06:41:44.295922  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.295928  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:44.295934  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:44.295993  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:44.324246  836363 cri.go:89] found id: ""
	I1210 06:41:44.324260  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.324266  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:44.324275  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:44.324285  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:44.387028  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:44.387048  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:44.415316  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:44.415332  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:44.471125  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:44.471146  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:44.487999  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:44.488017  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:44.555772  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:44.542191   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.542827   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.545275   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.545612   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.547112   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:44.542191   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.542827   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.545275   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.545612   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.547112   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
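	The cri.go lines above shell out to crictl once per control-plane component and treat empty output as "0 containers". A standalone sketch of that check follows; it assumes crictl and sudo are available on the node (an assumption for illustration, not the actual cri.go implementation):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers mirrors the command in the log:
	// `sudo crictl ps -a --quiet --name=<name>`, which prints one
	// container ID per line, or nothing when no container matches.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, name := range components {
			ids, err := listContainers(name)
			if err != nil {
				fmt.Printf("%-24s crictl failed: %v\n", name, err)
				continue
			}
			fmt.Printf("%-24s %d container(s): %v\n", name, len(ids), ids)
		}
	}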
	I1210 06:41:47.056814  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:47.066882  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:47.066943  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:47.091827  836363 cri.go:89] found id: ""
	I1210 06:41:47.091841  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.091848  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:47.091853  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:47.091910  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:47.115556  836363 cri.go:89] found id: ""
	I1210 06:41:47.115571  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.115578  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:47.115583  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:47.115640  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:47.140381  836363 cri.go:89] found id: ""
	I1210 06:41:47.140395  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.140402  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:47.140407  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:47.140466  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:47.164584  836363 cri.go:89] found id: ""
	I1210 06:41:47.164599  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.164606  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:47.164611  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:47.164669  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:47.188952  836363 cri.go:89] found id: ""
	I1210 06:41:47.188966  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.188973  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:47.188978  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:47.189036  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:47.215501  836363 cri.go:89] found id: ""
	I1210 06:41:47.215515  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.215522  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:47.215528  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:47.215594  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:47.248270  836363 cri.go:89] found id: ""
	I1210 06:41:47.248284  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.248291  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:47.248301  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:47.248312  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:47.264763  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:47.264780  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:47.328736  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:47.319556   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.320385   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.321992   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.322636   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.324430   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:47.319556   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.320385   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.321992   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.322636   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.324430   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:47.328762  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:47.328773  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:47.391108  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:47.391129  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:47.421573  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:47.421590  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:49.978044  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:49.988396  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:49.988461  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:50.019406  836363 cri.go:89] found id: ""
	I1210 06:41:50.019422  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.019430  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:50.019436  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:50.019525  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:50.046394  836363 cri.go:89] found id: ""
	I1210 06:41:50.046409  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.046416  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:50.046421  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:50.046513  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:50.073199  836363 cri.go:89] found id: ""
	I1210 06:41:50.073213  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.073220  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:50.073225  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:50.073287  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:50.099702  836363 cri.go:89] found id: ""
	I1210 06:41:50.099716  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.099722  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:50.099728  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:50.099787  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:50.128872  836363 cri.go:89] found id: ""
	I1210 06:41:50.128886  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.128893  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:50.128898  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:50.128956  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:50.153319  836363 cri.go:89] found id: ""
	I1210 06:41:50.153333  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.153340  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:50.153346  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:50.153404  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:50.180949  836363 cri.go:89] found id: ""
	I1210 06:41:50.180962  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.180968  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:50.180976  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:50.180986  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:50.242900  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:50.242922  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:50.273618  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:50.273634  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:50.328466  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:50.328485  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:50.344888  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:50.344905  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:50.410799  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:50.402194   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.403126   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.404850   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.405142   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.406780   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:50.402194   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.403126   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.404850   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.405142   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.406780   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:52.911683  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:52.922118  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:52.922186  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:52.947907  836363 cri.go:89] found id: ""
	I1210 06:41:52.947922  836363 logs.go:282] 0 containers: []
	W1210 06:41:52.947930  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:52.947935  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:52.948002  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:52.974796  836363 cri.go:89] found id: ""
	I1210 06:41:52.974812  836363 logs.go:282] 0 containers: []
	W1210 06:41:52.974820  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:52.974826  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:52.974885  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:53.005919  836363 cri.go:89] found id: ""
	I1210 06:41:53.005935  836363 logs.go:282] 0 containers: []
	W1210 06:41:53.005942  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:53.005950  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:53.006027  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:53.033320  836363 cri.go:89] found id: ""
	I1210 06:41:53.033333  836363 logs.go:282] 0 containers: []
	W1210 06:41:53.033340  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:53.033345  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:53.033405  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:53.061819  836363 cri.go:89] found id: ""
	I1210 06:41:53.061834  836363 logs.go:282] 0 containers: []
	W1210 06:41:53.061851  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:53.061857  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:53.061924  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:53.086290  836363 cri.go:89] found id: ""
	I1210 06:41:53.086304  836363 logs.go:282] 0 containers: []
	W1210 06:41:53.086311  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:53.086316  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:53.086374  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:53.111667  836363 cri.go:89] found id: ""
	I1210 06:41:53.111681  836363 logs.go:282] 0 containers: []
	W1210 06:41:53.111697  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:53.111706  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:53.111716  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:53.168392  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:53.168412  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:53.185807  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:53.185823  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:53.254387  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:53.246258   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.246805   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.248540   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.248996   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.250535   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:53.246258   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.246805   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.248540   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.248996   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.250535   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:53.254397  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:53.254408  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:53.319043  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:53.319063  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:55.851295  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:55.861334  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:55.861402  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:55.886929  836363 cri.go:89] found id: ""
	I1210 06:41:55.886949  836363 logs.go:282] 0 containers: []
	W1210 06:41:55.886957  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:55.886962  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:55.887020  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:55.915116  836363 cri.go:89] found id: ""
	I1210 06:41:55.915130  836363 logs.go:282] 0 containers: []
	W1210 06:41:55.915138  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:55.915142  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:55.915200  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:55.939013  836363 cri.go:89] found id: ""
	I1210 06:41:55.939033  836363 logs.go:282] 0 containers: []
	W1210 06:41:55.939040  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:55.939045  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:55.939101  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:55.964369  836363 cri.go:89] found id: ""
	I1210 06:41:55.964383  836363 logs.go:282] 0 containers: []
	W1210 06:41:55.964390  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:55.964395  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:55.964455  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:55.989465  836363 cri.go:89] found id: ""
	I1210 06:41:55.989478  836363 logs.go:282] 0 containers: []
	W1210 06:41:55.989485  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:55.989491  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:55.989557  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:56.014203  836363 cri.go:89] found id: ""
	I1210 06:41:56.014218  836363 logs.go:282] 0 containers: []
	W1210 06:41:56.014225  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:56.014230  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:56.014336  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:56.043892  836363 cri.go:89] found id: ""
	I1210 06:41:56.043906  836363 logs.go:282] 0 containers: []
	W1210 06:41:56.043916  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:56.043925  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:56.043936  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:56.112761  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:56.104681   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.105226   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.106816   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.107362   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.108899   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:56.104681   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.105226   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.106816   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.107362   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.108899   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:56.112770  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:56.112781  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:56.174642  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:56.174662  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:56.202947  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:56.202963  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:56.259062  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:56.259082  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:58.776033  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:58.786675  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:58.786737  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:58.822543  836363 cri.go:89] found id: ""
	I1210 06:41:58.822557  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.822563  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:58.822572  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:58.822634  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:58.848835  836363 cri.go:89] found id: ""
	I1210 06:41:58.848850  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.848857  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:58.848862  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:58.848919  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:58.876530  836363 cri.go:89] found id: ""
	I1210 06:41:58.876544  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.876551  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:58.876556  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:58.876615  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:58.901700  836363 cri.go:89] found id: ""
	I1210 06:41:58.901714  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.901728  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:58.901733  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:58.901791  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:58.928495  836363 cri.go:89] found id: ""
	I1210 06:41:58.928509  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.928515  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:58.928520  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:58.928577  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:58.952415  836363 cri.go:89] found id: ""
	I1210 06:41:58.952428  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.952435  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:58.952440  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:58.952496  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:58.981756  836363 cri.go:89] found id: ""
	I1210 06:41:58.981771  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.981788  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:58.981797  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:58.981809  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:59.049361  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:59.041702   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.042693   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.043691   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.044312   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.045540   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:59.041702   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.042693   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.043691   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.044312   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.045540   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:59.049372  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:59.049382  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:59.111079  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:59.111098  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:59.141459  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:59.141474  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:59.199670  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:59.199691  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:01.716854  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:01.728404  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:01.728475  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:01.756029  836363 cri.go:89] found id: ""
	I1210 06:42:01.756042  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.756049  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:42:01.756054  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:01.756109  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:01.780969  836363 cri.go:89] found id: ""
	I1210 06:42:01.780983  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.780990  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:42:01.780995  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:01.781055  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:01.820198  836363 cri.go:89] found id: ""
	I1210 06:42:01.820212  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.820219  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:42:01.820224  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:01.820284  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:01.848531  836363 cri.go:89] found id: ""
	I1210 06:42:01.848546  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.848553  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:42:01.848558  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:01.848617  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:01.878420  836363 cri.go:89] found id: ""
	I1210 06:42:01.878433  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.878441  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:01.878448  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:01.878534  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:01.905311  836363 cri.go:89] found id: ""
	I1210 06:42:01.905325  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.905344  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:42:01.905350  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:01.905421  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:01.929912  836363 cri.go:89] found id: ""
	I1210 06:42:01.929926  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.929944  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:01.929953  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:01.929963  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:01.985928  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:01.985948  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:02.003638  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:02.003657  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:02.075789  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:42:02.068334   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.068871   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.070020   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.070533   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.071984   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:42:02.068334   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.068871   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.070020   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.070533   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.071984   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:42:02.075800  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:02.075810  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:02.136779  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:42:02.136798  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:04.664122  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:04.675095  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:04.675159  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:04.699777  836363 cri.go:89] found id: ""
	I1210 06:42:04.699800  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.699808  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:42:04.699814  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:04.699911  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:04.724439  836363 cri.go:89] found id: ""
	I1210 06:42:04.724461  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.724468  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:42:04.724473  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:04.724538  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:04.750165  836363 cri.go:89] found id: ""
	I1210 06:42:04.750179  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.750187  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:42:04.750192  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:04.750260  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:04.775655  836363 cri.go:89] found id: ""
	I1210 06:42:04.775669  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.775676  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:42:04.775681  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:04.775740  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:04.805746  836363 cri.go:89] found id: ""
	I1210 06:42:04.805759  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.805776  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:04.805782  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:04.805849  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:04.836239  836363 cri.go:89] found id: ""
	I1210 06:42:04.836261  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.836269  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:42:04.836275  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:04.836344  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:04.862854  836363 cri.go:89] found id: ""
	I1210 06:42:04.862868  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.862875  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:04.862883  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:04.862893  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:04.922415  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:04.922435  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:04.939187  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:04.939203  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:05.006750  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:42:04.996413   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.996887   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.998439   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.998946   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:05.000828   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:42:04.996413   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.996887   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.998439   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.998946   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:05.000828   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
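	Note that the failure signature throughout is specifically "connection refused" rather than a timeout, which points at an apiserver container that never started (consistent with the "0 containers" findings above) rather than a network or firewall problem. A small hedged sketch that makes that distinction explicit (Linux-specific errno check; illustrative only):

	package main

	import (
		"errors"
		"fmt"
		"net"
		"syscall"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		switch {
		case err == nil:
			conn.Close()
			fmt.Println("port 8441 is open: something is listening")
		case errors.Is(err, syscall.ECONNREFUSED):
			// The host is reachable but nothing is bound to the port:
			// the pattern seen in every kubectl error in this log.
			fmt.Println("connection refused: no process bound to 8441")
		default:
			fmt.Printf("different failure (timeout, DNS, routing): %v\n", err)
		}
	}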
	I1210 06:42:05.006762  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:05.006773  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:05.070511  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:42:05.070533  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:07.606355  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:07.617096  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:07.617156  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:07.642031  836363 cri.go:89] found id: ""
	I1210 06:42:07.642047  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.642054  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:42:07.642060  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:07.642117  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:07.670075  836363 cri.go:89] found id: ""
	I1210 06:42:07.670089  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.670107  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:42:07.670114  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:07.670174  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:07.695503  836363 cri.go:89] found id: ""
	I1210 06:42:07.695517  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.695534  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:42:07.695539  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:07.695613  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:07.719792  836363 cri.go:89] found id: ""
	I1210 06:42:07.719805  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.719813  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:42:07.719818  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:07.719875  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:07.742885  836363 cri.go:89] found id: ""
	I1210 06:42:07.742899  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.742906  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:07.742911  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:07.742972  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:07.766658  836363 cri.go:89] found id: ""
	I1210 06:42:07.766672  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.766679  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:42:07.766684  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:07.766742  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:07.790890  836363 cri.go:89] found id: ""
	I1210 06:42:07.790917  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.790924  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:07.790932  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:42:07.790943  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:07.832030  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:07.832053  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:07.897794  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:07.897815  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:07.914747  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:07.914765  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:07.985400  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:42:07.977663   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.978174   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.979730   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.980136   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.981623   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:42:07.977663   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.978174   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.979730   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.980136   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.981623   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:42:07.985411  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:07.985422  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:10.549627  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:10.559818  836363 kubeadm.go:602] duration metric: took 4m3.540459063s to restartPrimaryControlPlane
	W1210 06:42:10.559885  836363 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 06:42:10.559961  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 06:42:10.971123  836363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:42:10.985022  836363 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:42:10.992941  836363 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:42:10.992994  836363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:42:11.001748  836363 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:42:11.001760  836363 kubeadm.go:158] found existing configuration files:
	
	I1210 06:42:11.001824  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:42:11.011668  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:42:11.011736  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:42:11.019850  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:42:11.027722  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:42:11.027783  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:42:11.035605  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:42:11.043216  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:42:11.043273  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:42:11.050854  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:42:11.058765  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:42:11.058844  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
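The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8441 is removed so kubeadm can regenerate it. A rough shell equivalent of that loop, runnable by hand on the node (file names and endpoint taken from the log above):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it points at the expected control-plane endpoint
	  sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done

In this run all four files were already missing, so every grep exits with status 2 and the rm calls are no-ops.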
	I1210 06:42:11.066934  836363 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:42:11.105523  836363 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 06:42:11.105575  836363 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:42:11.188151  836363 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:42:11.188218  836363 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:42:11.188255  836363 kubeadm.go:319] OS: Linux
	I1210 06:42:11.188304  836363 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:42:11.188354  836363 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:42:11.188398  836363 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:42:11.188448  836363 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:42:11.188493  836363 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:42:11.188543  836363 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:42:11.188590  836363 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:42:11.188634  836363 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:42:11.188683  836363 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:42:11.250124  836363 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:42:11.250230  836363 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:42:11.250322  836363 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:42:11.255308  836363 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:42:11.258775  836363 out.go:252]   - Generating certificates and keys ...
	I1210 06:42:11.258873  836363 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:42:11.258950  836363 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:42:11.259045  836363 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:42:11.259113  836363 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:42:11.259184  836363 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:42:11.259237  836363 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:42:11.259299  836363 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:42:11.259360  836363 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:42:11.259435  836363 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:42:11.259512  836363 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:42:11.259731  836363 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:42:11.259789  836363 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:42:12.423232  836363 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:42:12.577934  836363 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:42:12.783953  836363 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:42:13.093269  836363 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:42:13.330460  836363 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:42:13.331164  836363 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:42:13.333749  836363 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:42:13.336840  836363 out.go:252]   - Booting up control plane ...
	I1210 06:42:13.336937  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:42:13.337013  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:42:13.337083  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:42:13.358981  836363 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:42:13.359103  836363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:42:13.368350  836363 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:42:13.369623  836363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:42:13.370235  836363 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:42:13.505873  836363 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:42:13.506077  836363 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:46:13.506731  836363 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00070392s
	I1210 06:46:13.506763  836363 kubeadm.go:319] 
	I1210 06:46:13.506850  836363 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:46:13.506894  836363 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:46:13.506999  836363 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:46:13.507005  836363 kubeadm.go:319] 
	I1210 06:46:13.507125  836363 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:46:13.507158  836363 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:46:13.507196  836363 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:46:13.507200  836363 kubeadm.go:319] 
	I1210 06:46:13.511687  836363 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:46:13.512136  836363 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:46:13.512245  836363 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:46:13.512495  836363 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:46:13.512501  836363 kubeadm.go:319] 
	I1210 06:46:13.512574  836363 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 06:46:13.512709  836363 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00070392s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
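The failure above is kubeadm timing out on the kubelet's local health endpoint, not on the API server itself. The checks it recommends can be run directly inside the node; one way to do that for this run, assuming the docker-driver node is reachable with 'minikube ssh' (profile name taken from the containerd log further below):

	minikube ssh -p functional-534748              # shell into the node
	sudo systemctl status kubelet                  # is the unit running at all?
	sudo journalctl -xeu kubelet | tail -n 100     # most recent kubelet errors
	curl -sSL http://127.0.0.1:10248/healthz       # the exact probe kubeadm waits on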
	
	I1210 06:46:13.512792  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 06:46:13.924248  836363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:46:13.937517  836363 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:46:13.937579  836363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:46:13.945462  836363 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:46:13.945471  836363 kubeadm.go:158] found existing configuration files:
	
	I1210 06:46:13.945523  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:46:13.953499  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:46:13.953555  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:46:13.961232  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:46:13.969190  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:46:13.969248  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:46:13.976966  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:46:13.984824  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:46:13.984878  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:46:13.992414  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:46:14.002049  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:46:14.002141  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:46:14.011865  836363 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:46:14.052323  836363 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 06:46:14.052372  836363 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:46:14.126225  836363 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:46:14.126291  836363 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:46:14.126325  836363 kubeadm.go:319] OS: Linux
	I1210 06:46:14.126369  836363 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:46:14.126415  836363 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:46:14.126482  836363 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:46:14.126530  836363 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:46:14.126577  836363 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:46:14.126624  836363 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:46:14.126668  836363 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:46:14.126716  836363 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:46:14.126761  836363 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:46:14.195770  836363 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:46:14.195873  836363 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:46:14.195962  836363 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:46:14.202979  836363 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:46:14.208298  836363 out.go:252]   - Generating certificates and keys ...
	I1210 06:46:14.208399  836363 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:46:14.208478  836363 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:46:14.208559  836363 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:46:14.208622  836363 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:46:14.208696  836363 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:46:14.208754  836363 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:46:14.208821  836363 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:46:14.208886  836363 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:46:14.208964  836363 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:46:14.209040  836363 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:46:14.209080  836363 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:46:14.209138  836363 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:46:14.596166  836363 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:46:14.891862  836363 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:46:14.944957  836363 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:46:15.236183  836363 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:46:15.354206  836363 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:46:15.354795  836363 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:46:15.357335  836363 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:46:15.360719  836363 out.go:252]   - Booting up control plane ...
	I1210 06:46:15.360814  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:46:15.360889  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:46:15.360954  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:46:15.381031  836363 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:46:15.381140  836363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:46:15.389841  836363 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:46:15.391023  836363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:46:15.391179  836363 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:46:15.526794  836363 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:46:15.526907  836363 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:50:15.527073  836363 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000371584s
	I1210 06:50:15.527097  836363 kubeadm.go:319] 
	I1210 06:50:15.527182  836363 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:50:15.527235  836363 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:50:15.527340  836363 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:50:15.527347  836363 kubeadm.go:319] 
	I1210 06:50:15.527451  836363 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:50:15.527482  836363 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:50:15.527512  836363 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:50:15.527515  836363 kubeadm.go:319] 
	I1210 06:50:15.531196  836363 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:50:15.531609  836363 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:50:15.531716  836363 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:50:15.531977  836363 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 06:50:15.531981  836363 kubeadm.go:319] 
	I1210 06:50:15.532049  836363 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
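Both init attempts also print the cgroup v1 deprecation warning, which says to set the kubelet configuration option 'FailCgroupV1' to 'false' to keep running kubelet v1.35 on a cgroups v1 host. As a sketch only: the log shows kubeadm writing the kubelet config to /var/lib/kubelet/config.yaml, and the field is assumed here to serialize as failCgroupV1 in KubeletConfiguration v1beta1 (the spelling is an assumption; verify against the kubelet docs for v1.35). Since kubeadm rewrites config.yaml on every init, a persistent change would have to go through the kubeadm patches mechanism the log already shows being applied to target "kubeletconfiguration":

	# assumption: the option serializes as 'failCgroupV1'; the path is from the log
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet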
	I1210 06:50:15.532106  836363 kubeadm.go:403] duration metric: took 12m8.555678628s to StartCluster
	I1210 06:50:15.532150  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:50:15.532210  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:50:15.570548  836363 cri.go:89] found id: ""
	I1210 06:50:15.570562  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.570569  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:50:15.570575  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:50:15.570641  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:50:15.600057  836363 cri.go:89] found id: ""
	I1210 06:50:15.600071  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.600078  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:50:15.600083  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:50:15.600143  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:50:15.630207  836363 cri.go:89] found id: ""
	I1210 06:50:15.630221  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.630228  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:50:15.630232  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:50:15.630288  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:50:15.654767  836363 cri.go:89] found id: ""
	I1210 06:50:15.654781  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.654788  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:50:15.654793  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:50:15.654853  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:50:15.678797  836363 cri.go:89] found id: ""
	I1210 06:50:15.678823  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.678830  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:50:15.678835  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:50:15.678895  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:50:15.707130  836363 cri.go:89] found id: ""
	I1210 06:50:15.707144  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.707151  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:50:15.707157  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:50:15.707215  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:50:15.732682  836363 cri.go:89] found id: ""
	I1210 06:50:15.732696  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.732703  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:50:15.732711  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:50:15.732725  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:50:15.749626  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:50:15.749643  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:50:15.820658  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:50:15.811023   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.812026   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.813622   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.814217   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.815852   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:50:15.811023   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.812026   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.813622   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.814217   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.815852   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:50:15.820670  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:50:15.820682  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:50:15.883000  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:50:15.883021  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:50:15.913106  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:50:15.913122  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 06:50:15.972159  836363 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000371584s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:50:15.972201  836363 out.go:285] * 
	W1210 06:50:15.972316  836363 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1210 06:50:15.972359  836363 out.go:285] * 
	W1210 06:50:15.974510  836363 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:50:15.979994  836363 out.go:203] 
	W1210 06:50:15.983642  836363 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1210 06:50:15.983686  836363 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:50:15.983706  836363 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:50:15.987432  836363 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445107196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445121990Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445162984Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445179287Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445188756Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445200998Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445209959Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445223464Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445238939Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445267518Z" level=info msg="Connect containerd service"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445551476Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.446055950Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.466617657Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.466678671Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.466705092Z" level=info msg="Start subscribing containerd event"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.466755874Z" level=info msg="Start recovering state"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511858771Z" level=info msg="Start event monitor"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511903539Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511912844Z" level=info msg="Start streaming server"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511923740Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511932676Z" level=info msg="runtime interface starting up..."
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511939502Z" level=info msg="starting plugins..."
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511951014Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 06:38:05 functional-534748 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.523710063Z" level=info msg="containerd successfully booted in 0.098844s"
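The "failed to load cni during init" error above is containerd reporting that no CNI config exists yet, which is expected this early: per the Last Start log further down, minikube recommends kindnet for the docker driver with the containerd runtime and only deploys it once the control plane is up. A quick check on the node, assuming only the directory named in the error message:

	# stays empty until minikube installs its CNI (kindnet here)
	ls -l /etc/cni/net.d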
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:52:44.338094   23224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:52:44.338769   23224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:52:44.340572   23224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:52:44.341174   23224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:52:44.343024   23224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 06:52:44 up  5:34,  0 user,  load average: 1.01, 0.41, 0.48
	Linux functional-534748 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:52:41 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:52:41 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 515.
	Dec 10 06:52:41 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:41 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:41 functional-534748 kubelet[23047]: E1210 06:52:41.852943   23047 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:52:41 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:52:41 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:52:42 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 516.
	Dec 10 06:52:42 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:42 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:42 functional-534748 kubelet[23081]: E1210 06:52:42.520949   23081 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:52:42 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:52:42 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:52:43 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 517.
	Dec 10 06:52:43 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:43 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:43 functional-534748 kubelet[23119]: E1210 06:52:43.364372   23119 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:52:43 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:52:43 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:52:44 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 518.
	Dec 10 06:52:44 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:44 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:44 functional-534748 kubelet[23156]: E1210 06:52:44.118484   23156 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:52:44 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:52:44 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
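The thread running through the whole dump above: kubeadm's wait-control-plane phase times out because every kubelet start fails configuration validation on this cgroup v1 host (restart counter at 518 by the end of the log). A minimal shell sketch of the diagnosis and of the retry minikube suggests; the stat check is an assumption of mine, the other commands are the ones named in the output above:

	# "cgroup2fs" means the host is on cgroup v2; "tmpfs" means cgroup v1
	stat -fc %T /sys/fs/cgroup/
	# the log kubeadm recommends inspecting for the kubelet restart loop
	journalctl -xeu kubelet --no-pager | tail -n 20
	# retry with the cgroup driver the minikube warning suggests
	out/minikube-linux-arm64 start -p functional-534748 \
	  --extra-config=kubelet.cgroup-driver=systemd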
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748: exit status 2 (441.447118ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-534748" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.13s)
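For the validation error itself ("kubelet is configured to not run on a host using cgroup v1"), the SystemVerification warning in the dump names the escape hatch: set the kubelet configuration option FailCgroupV1 to false. A hypothetical sketch of applying that by hand on the node, assuming the option serializes as failCgroupV1 in the config.yaml written in the [kubelet-start] phase above; this is not something the test harness does:

	# assumption: KubeletConfiguration serializes the option as failCgroupV1
	cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
	failCgroupV1: false
	EOF
	sudo systemctl restart kubelet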

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect


=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-534748 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-534748 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (63.094712ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-534748 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-534748 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-534748 describe po hello-node-connect: exit status 1 (65.159705ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-534748 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-534748 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-534748 logs -l app=hello-node-connect: exit status 1 (95.659895ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-534748 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-534748 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-534748 describe svc hello-node-connect: exit status 1 (70.475478ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-534748 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-534748
helpers_test.go:244: (dbg) docker inspect functional-534748:

-- stdout --
	[
	    {
	        "Id": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	        "Created": "2025-12-10T06:23:23.608302198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 825111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:23:23.673039154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hostname",
	        "HostsPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hosts",
	        "LogPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db-json.log",
	        "Name": "/functional-534748",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-534748:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-534748",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	                "LowerDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-534748",
	                "Source": "/var/lib/docker/volumes/functional-534748/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-534748",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-534748",
	                "name.minikube.sigs.k8s.io": "functional-534748",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3110adc4cbcb3c834e173481804e433b3e0d0c32a77d5d828fb821433a717f76",
	            "SandboxKey": "/var/run/docker/netns/3110adc4cbcb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-534748": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:67:c6:ed:32:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5c6adbf364f07065a2170dca5c03ba4b1a3df833c656e117d5727d09797ca30e",
	                    "EndpointID": "ba255b7075cbb83aa425533c0034d7778d609f47fa6442925eb6c8393edb0fa6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-534748",
	                        "afb46bc1850e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
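One practical readout from the inspect dump: each guest port is published on an ephemeral 127.0.0.1 port, with the apiserver's 8441/tcp mapped to host port 33533 in this run. The Go-template query minikube itself runs for 22/tcp (visible in the Last Start log below) can be pointed at 8441/tcp to recover that mapping:

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' \
	  functional-534748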
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748: exit status 2 (300.778527ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-534748 cache reload                                                                                                                               │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ ssh     │ functional-534748 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │ 10 Dec 25 06:37 UTC │
	│ kubectl │ functional-534748 kubectl -- --context functional-534748 get pods                                                                                            │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:37 UTC │                     │
	│ start   │ -p functional-534748 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:38 UTC │                     │
	│ config  │ functional-534748 config unset cpus                                                                                                                          │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ cp      │ functional-534748 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ config  │ functional-534748 config get cpus                                                                                                                            │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │                     │
	│ config  │ functional-534748 config set cpus 2                                                                                                                          │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ config  │ functional-534748 config get cpus                                                                                                                            │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ config  │ functional-534748 config unset cpus                                                                                                                          │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ ssh     │ functional-534748 ssh -n functional-534748 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ config  │ functional-534748 config get cpus                                                                                                                            │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │                     │
	│ ssh     │ functional-534748 ssh echo hello                                                                                                                             │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ cp      │ functional-534748 cp functional-534748:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3903131200/001/cp-test.txt │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ ssh     │ functional-534748 ssh cat /etc/hostname                                                                                                                      │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ ssh     │ functional-534748 ssh -n functional-534748 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ tunnel  │ functional-534748 tunnel --alsologtostderr                                                                                                                   │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │                     │
	│ tunnel  │ functional-534748 tunnel --alsologtostderr                                                                                                                   │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │                     │
	│ cp      │ functional-534748 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ tunnel  │ functional-534748 tunnel --alsologtostderr                                                                                                                   │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │                     │
	│ ssh     │ functional-534748 ssh -n functional-534748 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ addons  │ functional-534748 addons list                                                                                                                                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ addons  │ functional-534748 addons list -o json                                                                                                                        │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:38:02
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:38:02.996848  836363 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:38:02.996953  836363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:38:02.996957  836363 out.go:374] Setting ErrFile to fd 2...
	I1210 06:38:02.996961  836363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:38:02.997226  836363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 06:38:02.997576  836363 out.go:368] Setting JSON to false
	I1210 06:38:02.998612  836363 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":19207,"bootTime":1765329476,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 06:38:02.998671  836363 start.go:143] virtualization:  
	I1210 06:38:03.004094  836363 out.go:179] * [functional-534748] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:38:03.007279  836363 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:38:03.007472  836363 notify.go:221] Checking for updates...
	I1210 06:38:03.013532  836363 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:38:03.016433  836363 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:38:03.019434  836363 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 06:38:03.022270  836363 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:38:03.025162  836363 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:38:03.028574  836363 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:38:03.028673  836363 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:38:03.063427  836363 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:38:03.063527  836363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:38:03.124292  836363 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 06:38:03.114881143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:38:03.124387  836363 docker.go:319] overlay module found
	I1210 06:38:03.127603  836363 out.go:179] * Using the docker driver based on existing profile
	I1210 06:38:03.130606  836363 start.go:309] selected driver: docker
	I1210 06:38:03.130616  836363 start.go:927] validating driver "docker" against &{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:38:03.130726  836363 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:38:03.130828  836363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:38:03.183470  836363 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 06:38:03.17400928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:38:03.183897  836363 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:38:03.183921  836363 cni.go:84] Creating CNI manager for ""
	I1210 06:38:03.183969  836363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:38:03.184018  836363 start.go:353] cluster config:
	{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:38:03.188981  836363 out.go:179] * Starting "functional-534748" primary control-plane node in "functional-534748" cluster
	I1210 06:38:03.191768  836363 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 06:38:03.194630  836363 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:38:03.197557  836363 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:38:03.197592  836363 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 06:38:03.197600  836363 cache.go:65] Caching tarball of preloaded images
	I1210 06:38:03.197644  836363 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:38:03.197695  836363 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 06:38:03.197704  836363 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1210 06:38:03.197812  836363 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/config.json ...
	I1210 06:38:03.219374  836363 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:38:03.219395  836363 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:38:03.219415  836363 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:38:03.219445  836363 start.go:360] acquireMachinesLock for functional-534748: {Name:mkd9a3d78ae3a00b69c5be0f7badb099aea924eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:38:03.219514  836363 start.go:364] duration metric: took 49.855µs to acquireMachinesLock for "functional-534748"
	I1210 06:38:03.219532  836363 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:38:03.219536  836363 fix.go:54] fixHost starting: 
	I1210 06:38:03.219816  836363 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
	I1210 06:38:03.236144  836363 fix.go:112] recreateIfNeeded on functional-534748: state=Running err=<nil>
	W1210 06:38:03.236163  836363 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:38:03.239412  836363 out.go:252] * Updating the running docker "functional-534748" container ...
	I1210 06:38:03.239438  836363 machine.go:94] provisionDockerMachine start ...
	I1210 06:38:03.239539  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:03.255986  836363 main.go:143] libmachine: Using SSH client type: native
	I1210 06:38:03.256288  836363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:38:03.256294  836363 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:38:03.393920  836363 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-534748
	
	I1210 06:38:03.393934  836363 ubuntu.go:182] provisioning hostname "functional-534748"
	I1210 06:38:03.393994  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:03.411659  836363 main.go:143] libmachine: Using SSH client type: native
	I1210 06:38:03.411963  836363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:38:03.411982  836363 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-534748 && echo "functional-534748" | sudo tee /etc/hostname
	I1210 06:38:03.556341  836363 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-534748
	
	I1210 06:38:03.556409  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:03.574119  836363 main.go:143] libmachine: Using SSH client type: native
	I1210 06:38:03.574414  836363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I1210 06:38:03.574427  836363 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-534748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-534748/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-534748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:38:03.711044  836363 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:38:03.711071  836363 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 06:38:03.711104  836363 ubuntu.go:190] setting up certificates
	I1210 06:38:03.711119  836363 provision.go:84] configureAuth start
	I1210 06:38:03.711202  836363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
	I1210 06:38:03.730176  836363 provision.go:143] copyHostCerts
	I1210 06:38:03.730250  836363 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 06:38:03.730257  836363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 06:38:03.730338  836363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 06:38:03.730431  836363 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 06:38:03.730435  836363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 06:38:03.730459  836363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 06:38:03.730669  836363 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 06:38:03.730673  836363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 06:38:03.730699  836363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 06:38:03.730787  836363 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.functional-534748 san=[127.0.0.1 192.168.49.2 functional-534748 localhost minikube]
	I1210 06:38:03.830346  836363 provision.go:177] copyRemoteCerts
	I1210 06:38:03.830399  836363 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:38:03.830448  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:03.847359  836363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:38:03.942214  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 06:38:03.959615  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:38:03.976341  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:38:03.993197  836363 provision.go:87] duration metric: took 282.055172ms to configureAuth
	I1210 06:38:03.993214  836363 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:38:03.993400  836363 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:38:03.993405  836363 machine.go:97] duration metric: took 753.963524ms to provisionDockerMachine
	I1210 06:38:03.993412  836363 start.go:293] postStartSetup for "functional-534748" (driver="docker")
	I1210 06:38:03.993421  836363 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:38:03.993478  836363 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:38:03.993515  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:04.011825  836363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:38:04.110674  836363 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:38:04.114166  836363 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:38:04.114184  836363 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:38:04.114196  836363 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 06:38:04.114252  836363 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 06:38:04.114330  836363 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 06:38:04.114407  836363 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts -> hosts in /etc/test/nested/copy/786751
	I1210 06:38:04.114451  836363 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/786751
	I1210 06:38:04.122085  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 06:38:04.140353  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts --> /etc/test/nested/copy/786751/hosts (40 bytes)
	I1210 06:38:04.160314  836363 start.go:296] duration metric: took 166.888171ms for postStartSetup
	I1210 06:38:04.160387  836363 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:38:04.160439  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:04.179224  836363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:38:04.271903  836363 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
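	The two df probes each read one field from the second output row: the first takes the Use% column, the second the Available column with 1 GiB block sizing. For example (outputs are illustrative):
	  df -h /var  | awk 'NR==2{print $5}'   # e.g. "23%" - Use% column
	  df -BG /var | awk 'NR==2{print $4}'   # e.g. "15G" - Available, in GiB blocks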
	I1210 06:38:04.277112  836363 fix.go:56] duration metric: took 1.057568371s for fixHost
	I1210 06:38:04.277129  836363 start.go:83] releasing machines lock for "functional-534748", held for 1.057608798s
	I1210 06:38:04.277219  836363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
	I1210 06:38:04.295104  836363 ssh_runner.go:195] Run: cat /version.json
	I1210 06:38:04.295130  836363 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:38:04.295198  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:04.295203  836363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
	I1210 06:38:04.320108  836363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:38:04.320646  836363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
	I1210 06:38:04.418978  836363 ssh_runner.go:195] Run: systemctl --version
	I1210 06:38:04.509352  836363 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:38:04.513794  836363 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:38:04.513869  836363 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:38:04.521471  836363 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:38:04.521486  836363 start.go:496] detecting cgroup driver to use...
	I1210 06:38:04.521523  836363 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:38:04.521580  836363 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 06:38:04.537005  836363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 06:38:04.550809  836363 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:38:04.550892  836363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:38:04.567139  836363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:38:04.580704  836363 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:38:04.697131  836363 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:38:04.843057  836363 docker.go:234] disabling docker service ...
	I1210 06:38:04.843134  836363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:38:04.858243  836363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:38:04.871472  836363 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:38:04.992555  836363 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:38:05.113941  836363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:38:05.127335  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:38:05.141919  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 06:38:05.151900  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 06:38:05.161151  836363 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 06:38:05.161213  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 06:38:05.170764  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:38:05.180471  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 06:38:05.189238  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:38:05.197957  836363 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:38:05.206107  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 06:38:05.215515  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 06:38:05.224555  836363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 06:38:05.233326  836363 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:38:05.241235  836363 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:38:05.248850  836363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:38:05.372410  836363 ssh_runner.go:195] Run: sudo systemctl restart containerd
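	The sed edits above target well-known keys of containerd's CRI plugin configuration; after the rewrite and restart, the touched fragment of /etc/containerd/config.toml should look roughly like this (a sketch of the edited keys only, shown as grep output, not the full file):
	  $ grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	      sandbox_image = "registry.k8s.io/pause:3.10.1"
	      SystemdCgroup = false
	      conf_dir = "/etc/cni/net.d"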
	I1210 06:38:05.513843  836363 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 06:38:05.513915  836363 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 06:38:05.519638  836363 start.go:564] Will wait 60s for crictl version
	I1210 06:38:05.519732  836363 ssh_runner.go:195] Run: which crictl
	I1210 06:38:05.524751  836363 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:38:05.554788  836363 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 06:38:05.554852  836363 ssh_runner.go:195] Run: containerd --version
	I1210 06:38:05.575345  836363 ssh_runner.go:195] Run: containerd --version
	I1210 06:38:05.606405  836363 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 06:38:05.609314  836363 cli_runner.go:164] Run: docker network inspect functional-534748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:38:05.625429  836363 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 06:38:05.632180  836363 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1210 06:38:05.635024  836363 kubeadm.go:884] updating cluster {Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:38:05.635199  836363 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:38:05.635275  836363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:38:05.663485  836363 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 06:38:05.663496  836363 containerd.go:534] Images already preloaded, skipping extraction
	I1210 06:38:05.663555  836363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:38:05.692188  836363 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 06:38:05.692214  836363 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:38:05.692220  836363 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1210 06:38:05.692316  836363 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-534748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
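	The empty ExecStart= line in the kubelet drop-in above is deliberate systemd syntax: a non-oneshot service may have only one ExecStart, so a drop-in must first clear the value inherited from the base unit before assigning its own. The effective command can be inspected with:
	  systemctl cat kubelet                  # base unit plus the 10-kubeadm.conf drop-in
	  systemctl show -p ExecStart kubelet    # the ExecStart actually in force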
	I1210 06:38:05.692382  836363 ssh_runner.go:195] Run: sudo crictl info
	I1210 06:38:05.716412  836363 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1210 06:38:05.716430  836363 cni.go:84] Creating CNI manager for ""
	I1210 06:38:05.716438  836363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:38:05.716453  836363 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:38:05.716479  836363 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-534748 NodeName:functional-534748 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:38:05.716586  836363 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-534748"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
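	The generated manifest above bundles four kubeadm documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). On kubeadm releases that ship the subcommand (roughly v1.26 and later), it can be sanity-checked before use, e.g.:
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml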
	
	I1210 06:38:05.716652  836363 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 06:38:05.724579  836363 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:38:05.724638  836363 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:38:05.732044  836363 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 06:38:05.744806  836363 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 06:38:05.757235  836363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1210 06:38:05.769602  836363 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:38:05.773238  836363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:38:05.892525  836363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:38:06.296632  836363 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748 for IP: 192.168.49.2
	I1210 06:38:06.296643  836363 certs.go:195] generating shared ca certs ...
	I1210 06:38:06.296658  836363 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:38:06.296809  836363 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 06:38:06.296849  836363 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 06:38:06.296855  836363 certs.go:257] generating profile certs ...
	I1210 06:38:06.296937  836363 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key
	I1210 06:38:06.297021  836363 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key.7cb3dc2f
	I1210 06:38:06.297068  836363 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key
	I1210 06:38:06.297177  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 06:38:06.297208  836363 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 06:38:06.297216  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:38:06.297246  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 06:38:06.297268  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:38:06.297291  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 06:38:06.297337  836363 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 06:38:06.297938  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:38:06.317159  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:38:06.336653  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:38:06.357682  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:38:06.376860  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:38:06.394800  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:38:06.412862  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:38:06.430175  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:38:06.447717  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 06:38:06.465124  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:38:06.482520  836363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 06:38:06.500341  836363 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:38:06.513157  836363 ssh_runner.go:195] Run: openssl version
	I1210 06:38:06.519293  836363 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 06:38:06.526724  836363 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 06:38:06.534054  836363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 06:38:06.537762  836363 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 06:38:06.537817  836363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 06:38:06.579287  836363 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:38:06.586741  836363 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:06.593909  836363 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:38:06.601430  836363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:06.605107  836363 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:06.605174  836363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:06.646057  836363 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:38:06.653276  836363 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 06:38:06.660757  836363 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 06:38:06.668784  836363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 06:38:06.672757  836363 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 06:38:06.672825  836363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 06:38:06.713985  836363 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
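	The ln -fs / test -L pairs above implement OpenSSL's hashed-directory convention: each CA under /etc/ssl/certs is reachable through a symlink named after the certificate's subject hash plus a ".0" suffix, which is where names like 3ec20f2e.0, b5213941.0, and 51391683.0 come from. The hash is derived from the certificate itself:
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"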
	I1210 06:38:06.721257  836363 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:38:06.724932  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:38:06.765952  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:38:06.807038  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:38:06.847752  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:38:06.890289  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:38:06.933893  836363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
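	Each -checkend 86400 probe above asks whether the certificate remains valid for the next 24 hours (86400 s): exit status 0 means it will, non-zero means it expires within the window. A sketch of how such a result is typically consumed (illustrative handling, not minikube's exact logic):
	  if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/front-proxy-client.crt; then
	    echo "cert valid for at least 24h"
	  else
	    echo "cert expires within 24h; regenerate"
	  fi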
	I1210 06:38:06.976437  836363 kubeadm.go:401] StartCluster: {Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:38:06.976545  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 06:38:06.976606  836363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:38:07.011412  836363 cri.go:89] found id: ""
	I1210 06:38:07.011470  836363 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:38:07.019342  836363 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:38:07.019351  836363 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:38:07.019420  836363 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:38:07.026888  836363 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:38:07.027424  836363 kubeconfig.go:125] found "functional-534748" server: "https://192.168.49.2:8441"
	I1210 06:38:07.028660  836363 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:38:07.037364  836363 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 06:23:31.333930823 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 06:38:05.762986837 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
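	Drift detection here is simply a unified diff between the kubeadm manifest in use and the freshly rendered one; any non-empty diff (in this run, the enable-admission-plugins value) triggers reconfiguration. The same check in shell form (illustrative; minikube does this in Go):
	  if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	    echo "kubeadm config drift detected; reconfiguring"
	  fi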
	I1210 06:38:07.037389  836363 kubeadm.go:1161] stopping kube-system containers ...
	I1210 06:38:07.037401  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1210 06:38:07.037465  836363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:38:07.075015  836363 cri.go:89] found id: ""
	I1210 06:38:07.075109  836363 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 06:38:07.098429  836363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:38:07.106312  836363 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 10 06:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 10 06:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 10 06:27 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 10 06:27 /etc/kubernetes/scheduler.conf
	
	I1210 06:38:07.106367  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:38:07.114107  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:38:07.122067  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:38:07.122121  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:38:07.130176  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:38:07.138001  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:38:07.138055  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:38:07.145554  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:38:07.153390  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:38:07.153446  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:38:07.160768  836363 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:38:07.168493  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:38:07.213471  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:38:08.026655  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:38:08.236384  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:38:08.298826  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:38:08.351741  836363 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:38:08.351821  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	(... "Run: sudo pgrep -xnf kube-apiserver.*minikube.*" repeated at ~500 ms intervals from 06:38:08.852713 through 06:39:07.351946 without a match ...)
	I1210 06:39:07.852642  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
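	The wait above is a fixed-cadence poll: pgrep is retried roughly every 500 ms until the apiserver process appears or the budget runs out (here about a minute elapses before minikube falls back to gathering logs). A minimal re-implementation of the pattern, assuming a 60 s budget:
	  deadline=$((SECONDS + 60))
	  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver did not appear"; break; }
	    sleep 0.5
	  done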
	I1210 06:39:08.352868  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:08.352944  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:08.381205  836363 cri.go:89] found id: ""
	I1210 06:39:08.381219  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.381227  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:08.381232  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:08.381288  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:08.404633  836363 cri.go:89] found id: ""
	I1210 06:39:08.404646  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.404654  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:08.404659  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:08.404721  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:08.428513  836363 cri.go:89] found id: ""
	I1210 06:39:08.428527  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.428534  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:08.428546  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:08.428606  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:08.453023  836363 cri.go:89] found id: ""
	I1210 06:39:08.453036  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.453043  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:08.453049  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:08.453105  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:08.481527  836363 cri.go:89] found id: ""
	I1210 06:39:08.481540  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.481547  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:08.481552  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:08.481609  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:08.506550  836363 cri.go:89] found id: ""
	I1210 06:39:08.506565  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.506580  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:08.506585  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:08.506649  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:08.531724  836363 cri.go:89] found id: ""
	I1210 06:39:08.531738  836363 logs.go:282] 0 containers: []
	W1210 06:39:08.531745  836363 logs.go:284] No container was found matching "kindnet"
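	Each probe above narrows crictl ps to one container name and prints only IDs; an empty result is what the found id: "" lines record. The check reduces to:
	  ids=$(sudo crictl ps -a --quiet --name=kube-apiserver)
	  [ -z "$ids" ] && echo 'No container was found matching "kube-apiserver"'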
	I1210 06:39:08.531752  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:08.531763  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:08.571815  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:08.571832  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:08.630094  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:08.630112  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:08.647317  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:08.647335  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:08.715592  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:08.706872   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.707541   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.709199   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.709733   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.711367   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:08.706872   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.707541   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.709199   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.709733   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:08.711367   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:08.715603  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:08.715614  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
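
Each pass of this retry loop has the same shape: minikube first checks for a kube-apiserver process with pgrep, then queries the CRI runtime for containers matching each control-plane component, and only then gathers kubelet, dmesg, describe-nodes, containerd, and container-status output. A minimal Go sketch of the per-component query, reusing the exact crictl invocation shown above (the helper name and the component-list wiring are illustrative, not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs runs the same "sudo crictl ps -a --quiet --name=<component>"
    // query that appears in the log above and returns the matching container IDs.
    // The helper name is hypothetical.
    func listContainerIDs(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, c := range components {
            ids, err := listContainerIDs(c)
            if err != nil {
                fmt.Printf("%s: query failed: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d containers\n", c, len(ids))
        }
    }

In the log above, every one of these queries returns an empty ID list, which is what triggers the "No container was found matching ..." warnings.
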
	I1210 06:39:11.280652  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:11.290422  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:11.290516  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:11.314331  836363 cri.go:89] found id: ""
	I1210 06:39:11.314345  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.314352  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:11.314357  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:11.314419  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:11.337726  836363 cri.go:89] found id: ""
	I1210 06:39:11.337741  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.337747  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:11.337752  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:11.337812  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:11.365800  836363 cri.go:89] found id: ""
	I1210 06:39:11.365815  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.365821  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:11.365826  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:11.365886  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:11.394804  836363 cri.go:89] found id: ""
	I1210 06:39:11.394818  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.394825  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:11.394830  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:11.394887  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:11.419726  836363 cri.go:89] found id: ""
	I1210 06:39:11.419740  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.419746  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:11.419751  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:11.419810  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:11.445533  836363 cri.go:89] found id: ""
	I1210 06:39:11.445547  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.445554  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:11.445560  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:11.445618  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:11.470212  836363 cri.go:89] found id: ""
	I1210 06:39:11.470227  836363 logs.go:282] 0 containers: []
	W1210 06:39:11.470233  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:11.470241  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:11.470251  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:11.529183  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:11.529202  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:11.546384  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:11.546400  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:11.640312  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:11.631803   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.632307   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.633874   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.635142   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.635867   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:11.631803   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.632307   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.633874   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.635142   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:11.635867   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:11.640322  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:11.640333  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:11.703828  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:11.703850  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:14.230665  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:14.241121  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:14.241183  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:14.268951  836363 cri.go:89] found id: ""
	I1210 06:39:14.268964  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.268974  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:14.268979  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:14.269035  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:14.292742  836363 cri.go:89] found id: ""
	I1210 06:39:14.292761  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.292768  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:14.292773  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:14.292838  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:14.317527  836363 cri.go:89] found id: ""
	I1210 06:39:14.317540  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.317547  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:14.317552  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:14.317609  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:14.344738  836363 cri.go:89] found id: ""
	I1210 06:39:14.344751  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.344758  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:14.344764  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:14.344822  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:14.369086  836363 cri.go:89] found id: ""
	I1210 06:39:14.369101  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.369108  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:14.369114  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:14.369172  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:14.393919  836363 cri.go:89] found id: ""
	I1210 06:39:14.393932  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.393938  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:14.393943  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:14.394005  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:14.418228  836363 cri.go:89] found id: ""
	I1210 06:39:14.418242  836363 logs.go:282] 0 containers: []
	W1210 06:39:14.418249  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:14.418257  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:14.418267  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:14.481544  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:14.481564  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:14.509051  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:14.509072  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:14.574238  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:14.574259  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:14.594306  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:14.594323  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:14.659264  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:14.651062   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.651501   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.653284   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.653744   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.655187   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:14.651062   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.651501   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.653284   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.653744   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:14.655187   10956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
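
Every describe-nodes attempt above fails identically: kubectl dials the apiserver on localhost:8441 (the profile's --apiserver-port) and is refused, which is consistent with the empty crictl listings, since no kube-apiserver container exists to accept the connection. The failure reduces to a plain TCP dial, sketched below (endpoint taken from the log; the timeout is an arbitrary choice):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // kubectl's "connection refused" boils down to a failed TCP dial;
        // this probe reproduces just that step against the same endpoint.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err) // e.g. connect: connection refused
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }
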
	I1210 06:39:17.159960  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:17.169978  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:17.170036  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:17.194333  836363 cri.go:89] found id: ""
	I1210 06:39:17.194347  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.194354  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:17.194359  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:17.194418  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:17.218507  836363 cri.go:89] found id: ""
	I1210 06:39:17.218521  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.218528  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:17.218533  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:17.218617  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:17.243499  836363 cri.go:89] found id: ""
	I1210 06:39:17.243513  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.243521  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:17.243527  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:17.243585  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:17.271019  836363 cri.go:89] found id: ""
	I1210 06:39:17.271034  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.271041  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:17.271048  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:17.271106  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:17.296491  836363 cri.go:89] found id: ""
	I1210 06:39:17.296506  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.296513  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:17.296517  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:17.296574  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:17.327127  836363 cri.go:89] found id: ""
	I1210 06:39:17.327142  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.327149  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:17.327156  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:17.327214  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:17.351001  836363 cri.go:89] found id: ""
	I1210 06:39:17.351016  836363 logs.go:282] 0 containers: []
	W1210 06:39:17.351023  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:17.351031  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:17.351046  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:17.408952  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:17.408971  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:17.425660  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:17.425676  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:17.495167  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:17.486424   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.487213   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.488883   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.489501   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.491179   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:17.486424   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.487213   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.488883   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.489501   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:17.491179   11041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:17.495179  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:17.495190  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:17.562848  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:17.562868  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:20.100845  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:20.111238  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:20.111303  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:20.135715  836363 cri.go:89] found id: ""
	I1210 06:39:20.135730  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.135737  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:20.135742  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:20.135849  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:20.162728  836363 cri.go:89] found id: ""
	I1210 06:39:20.162742  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.162750  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:20.162754  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:20.162817  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:20.186896  836363 cri.go:89] found id: ""
	I1210 06:39:20.186910  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.186918  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:20.186923  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:20.187033  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:20.211401  836363 cri.go:89] found id: ""
	I1210 06:39:20.211416  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.211423  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:20.211428  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:20.211494  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:20.241049  836363 cri.go:89] found id: ""
	I1210 06:39:20.241063  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.241071  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:20.241075  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:20.241136  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:20.264812  836363 cri.go:89] found id: ""
	I1210 06:39:20.264826  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.264833  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:20.264839  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:20.264905  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:20.289153  836363 cri.go:89] found id: ""
	I1210 06:39:20.289167  836363 logs.go:282] 0 containers: []
	W1210 06:39:20.289179  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:20.289187  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:20.289198  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:20.305825  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:20.305841  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:20.372702  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:20.364207   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.364892   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.366572   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.367140   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.368841   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:20.364207   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.364892   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.366572   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.367140   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:20.368841   11144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:20.372716  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:20.372727  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:20.434137  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:20.434156  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:20.462784  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:20.462801  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
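
The gathering half of each pass shells out to journalctl for the last 400 lines of the kubelet and containerd units, plus a dmesg call filtered to warning level and above. A hedged sketch of the same journalctl pull via os/exec (unit names and line count copied from the commands above; the helper is hypothetical):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailUnitLogs fetches the last n lines of a systemd unit's journal,
    // matching the `journalctl -u <unit> -n 400` invocations in the log.
    func tailUnitLogs(unit string, n int) (string, error) {
        out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).Output()
        return string(out), err
    }

    func main() {
        for _, unit := range []string{"kubelet", "containerd"} {
            logs, err := tailUnitLogs(unit, 400)
            if err != nil {
                fmt.Printf("%s: %v\n", unit, err)
                continue
            }
            fmt.Printf("=== %s (%d bytes) ===\n", unit, len(logs))
        }
    }
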
	I1210 06:39:23.020338  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:23.033250  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:23.033312  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:23.057227  836363 cri.go:89] found id: ""
	I1210 06:39:23.057241  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.057247  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:23.057252  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:23.057310  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:23.082261  836363 cri.go:89] found id: ""
	I1210 06:39:23.082275  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.082282  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:23.082287  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:23.082346  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:23.106424  836363 cri.go:89] found id: ""
	I1210 06:39:23.106438  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.106445  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:23.106451  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:23.106554  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:23.132399  836363 cri.go:89] found id: ""
	I1210 06:39:23.132414  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.132429  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:23.132435  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:23.132492  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:23.162454  836363 cri.go:89] found id: ""
	I1210 06:39:23.162494  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.162501  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:23.162507  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:23.162581  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:23.187219  836363 cri.go:89] found id: ""
	I1210 06:39:23.187233  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.187240  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:23.187245  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:23.187310  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:23.212781  836363 cri.go:89] found id: ""
	I1210 06:39:23.212795  836363 logs.go:282] 0 containers: []
	W1210 06:39:23.212802  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:23.212809  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:23.212821  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:23.269301  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:23.269321  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:23.286019  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:23.286034  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:23.349588  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:23.342068   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.342600   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.344048   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.344478   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.345899   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:23.342068   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.342600   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.344048   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.344478   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:23.345899   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:23.349598  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:23.349608  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:23.410637  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:23.410657  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:25.946659  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:25.956427  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:25.956484  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:25.980198  836363 cri.go:89] found id: ""
	I1210 06:39:25.980212  836363 logs.go:282] 0 containers: []
	W1210 06:39:25.980219  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:25.980224  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:25.980282  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:26.007385  836363 cri.go:89] found id: ""
	I1210 06:39:26.007400  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.007408  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:26.007413  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:26.007504  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:26.036729  836363 cri.go:89] found id: ""
	I1210 06:39:26.036743  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.036750  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:26.036755  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:26.036816  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:26.062224  836363 cri.go:89] found id: ""
	I1210 06:39:26.062238  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.062245  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:26.062250  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:26.062310  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:26.087647  836363 cri.go:89] found id: ""
	I1210 06:39:26.087661  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.087668  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:26.087682  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:26.087742  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:26.111730  836363 cri.go:89] found id: ""
	I1210 06:39:26.111744  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.111751  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:26.111756  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:26.111815  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:26.140490  836363 cri.go:89] found id: ""
	I1210 06:39:26.140504  836363 logs.go:282] 0 containers: []
	W1210 06:39:26.140511  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:26.140525  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:26.140534  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:26.196200  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:26.196219  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:26.212571  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:26.212587  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:26.273577  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:26.265176   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.265699   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.267151   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.267679   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.269363   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:26.265176   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.265699   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.267151   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.267679   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:26.269363   11354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:26.273590  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:26.273603  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:26.335078  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:26.335098  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
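
The container-status step is runtime-agnostic: the shell snippet resolves crictl on $PATH (falling back to the literal name), and if the whole crictl invocation fails it retries with docker ps -a. A rough Go equivalent is sketched below; note it approximates the fallback by checking $PATH up front rather than on command failure, so it is illustrative only:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Approximates `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`
        // by preferring crictl when it is on PATH; the real shell form only falls
        // back to docker after the crictl command itself fails.
        tool := "crictl"
        if _, err := exec.LookPath("crictl"); err != nil {
            tool = "docker"
        }
        out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
        if err != nil {
            fmt.Printf("%s ps -a failed: %v\n", tool, err)
        }
        fmt.Print(string(out))
    }
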
	I1210 06:39:28.869553  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:28.880899  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:28.880964  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:28.906428  836363 cri.go:89] found id: ""
	I1210 06:39:28.906442  836363 logs.go:282] 0 containers: []
	W1210 06:39:28.906449  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:28.906454  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:28.906544  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:28.931886  836363 cri.go:89] found id: ""
	I1210 06:39:28.931900  836363 logs.go:282] 0 containers: []
	W1210 06:39:28.931908  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:28.931912  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:28.931973  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:28.961315  836363 cri.go:89] found id: ""
	I1210 06:39:28.961329  836363 logs.go:282] 0 containers: []
	W1210 06:39:28.961336  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:28.961340  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:28.961401  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:28.986397  836363 cri.go:89] found id: ""
	I1210 06:39:28.986411  836363 logs.go:282] 0 containers: []
	W1210 06:39:28.986419  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:28.986425  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:28.986507  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:29.012532  836363 cri.go:89] found id: ""
	I1210 06:39:29.012546  836363 logs.go:282] 0 containers: []
	W1210 06:39:29.012554  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:29.012559  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:29.012617  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:29.041722  836363 cri.go:89] found id: ""
	I1210 06:39:29.041736  836363 logs.go:282] 0 containers: []
	W1210 06:39:29.041744  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:29.041749  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:29.041810  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:29.067638  836363 cri.go:89] found id: ""
	I1210 06:39:29.067652  836363 logs.go:282] 0 containers: []
	W1210 06:39:29.067660  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:29.067675  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:29.067686  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:29.123932  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:29.123951  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:29.140346  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:29.140363  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:29.205033  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:29.196885   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.197511   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.199079   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.199683   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.201215   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:29.196885   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.197511   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.199079   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.199683   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:29.201215   11459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:29.205044  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:29.205056  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:29.268564  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:29.268592  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:31.797415  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:31.810439  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:31.810560  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:31.839718  836363 cri.go:89] found id: ""
	I1210 06:39:31.839731  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.839738  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:31.839743  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:31.839812  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:31.866887  836363 cri.go:89] found id: ""
	I1210 06:39:31.866901  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.866908  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:31.866913  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:31.866971  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:31.896088  836363 cri.go:89] found id: ""
	I1210 06:39:31.896102  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.896109  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:31.896114  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:31.896183  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:31.920769  836363 cri.go:89] found id: ""
	I1210 06:39:31.920783  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.920790  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:31.920804  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:31.920870  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:31.944941  836363 cri.go:89] found id: ""
	I1210 06:39:31.944955  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.944973  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:31.944979  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:31.945062  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:31.969699  836363 cri.go:89] found id: ""
	I1210 06:39:31.969713  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.969719  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:31.969734  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:31.969796  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:31.994263  836363 cri.go:89] found id: ""
	I1210 06:39:31.994288  836363 logs.go:282] 0 containers: []
	W1210 06:39:31.994296  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:31.994305  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:31.994315  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:32.051337  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:32.051358  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:32.068506  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:32.068524  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:32.133010  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:32.124121   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.124862   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.126702   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.127174   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.128721   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:39:32.124121   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.124862   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.126702   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.127174   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:32.128721   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:32.133022  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:32.133032  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:32.195411  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:32.195432  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
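
The timestamps show a fresh pass starting roughly every 2.5–3 seconds, and every pass finds zero containers for every component, so this is a poll-until-healthy loop that never succeeds before the test's timeout. A generic sketch of that retry shape (interval and deadline here are assumptions, not minikube's actual values):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // pollUntil retries check on a fixed interval until it returns true or the
    // deadline expires. Interval and deadline are illustrative stand-ins for
    // whatever the harness actually uses.
    func pollUntil(interval, deadline time.Duration, check func() bool) error {
        end := time.Now().Add(deadline)
        for time.Now().Before(end) {
            if check() {
                return nil
            }
            time.Sleep(interval)
        }
        return errors.New("timed out waiting for condition")
    }

    func main() {
        err := pollUntil(3*time.Second, 30*time.Second, func() bool {
            // Stands in for "a kube-apiserver container was found"; in the log
            // above this never becomes true, so the loop runs until timeout.
            return false
        })
        fmt.Println(err)
    }
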
	I1210 06:39:34.725830  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:34.736154  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:34.736227  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:34.760592  836363 cri.go:89] found id: ""
	I1210 06:39:34.760606  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.760613  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:34.760618  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:34.760679  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:34.789194  836363 cri.go:89] found id: ""
	I1210 06:39:34.789208  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.789215  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:34.789220  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:34.789290  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:34.821768  836363 cri.go:89] found id: ""
	I1210 06:39:34.821783  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.821798  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:34.821804  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:34.821862  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:34.851156  836363 cri.go:89] found id: ""
	I1210 06:39:34.851182  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.851190  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:34.851195  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:34.851262  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:34.881339  836363 cri.go:89] found id: ""
	I1210 06:39:34.881353  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.881361  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:34.881366  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:34.881439  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:34.906857  836363 cri.go:89] found id: ""
	I1210 06:39:34.906871  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.906878  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:34.906884  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:34.906950  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:34.935793  836363 cri.go:89] found id: ""
	I1210 06:39:34.935807  836363 logs.go:282] 0 containers: []
	W1210 06:39:34.935814  836363 logs.go:284] No container was found matching "kindnet"
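The block above is minikube enumerating each expected control-plane container through the CRI, one crictl query per component. The same sweep can be run manually inside the node using only the command the log already shows:

    # Count matching containers for each expected component (0 = missing).
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      printf '%s: ' "$c"
      sudo crictl ps -a --quiet --name="$c" | wc -l
    done

An empty result for every name, as here, means the kubelet has not started any static pods at all, not merely that one component crashed.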
	I1210 06:39:34.935822  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:34.935832  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:34.993322  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:34.993345  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
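For readability, the short flags in the dmesg invocation expand to long options: -P disables the pager, -H requests human-readable output, -L=never turns colour off, and --level restricts output to warning severity and worse:

    # Same kernel-log filter with long option names (util-linux dmesg).
    sudo dmesg --nopager --human --color=never --level warn,err,crit,alert,emerg | tail -n 400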
	I1210 06:39:35.011292  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:35.011309  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:35.078043  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:35.069050   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:35.070041   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:35.070728   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:35.072495   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:35.073080   11670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:35.078052  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:35.078063  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:35.146644  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:35.146671  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
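From here the report repeats the same probe cycle roughly every three seconds (06:39:34, :37, :40, and so on): pgrep for a running kube-apiserver process, then the CRI sweep, then log gathering. A reduced sketch of an equivalent wait loop, run inside the node and using only commands already present in the log:

    # Poll until an apiserver process for this minikube profile appears.
    # -x exact match, -n newest process, -f match the full command line.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done

In this run the condition never becomes true, which is why every subsequent "describe nodes" attempt reports connection refused.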
	I1210 06:39:37.678658  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:37.688848  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:37.688925  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:37.713621  836363 cri.go:89] found id: ""
	I1210 06:39:37.713635  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.713642  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:37.713647  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:37.713706  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:37.738638  836363 cri.go:89] found id: ""
	I1210 06:39:37.738651  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.738658  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:37.738663  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:37.738728  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:37.767364  836363 cri.go:89] found id: ""
	I1210 06:39:37.767378  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.767385  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:37.767390  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:37.767446  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:37.804827  836363 cri.go:89] found id: ""
	I1210 06:39:37.804841  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.804848  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:37.804854  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:37.804911  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:37.830424  836363 cri.go:89] found id: ""
	I1210 06:39:37.830438  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.830445  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:37.830449  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:37.830529  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:37.862851  836363 cri.go:89] found id: ""
	I1210 06:39:37.862864  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.862871  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:37.862876  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:37.862933  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:37.887629  836363 cri.go:89] found id: ""
	I1210 06:39:37.887643  836363 logs.go:282] 0 containers: []
	W1210 06:39:37.887650  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:37.887686  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:37.887698  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:37.946033  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:37.946053  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:37.962951  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:37.962969  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:38.030263  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:38.021061   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:38.021797   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:38.022740   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:38.024684   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:38.025056   11775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:38.030274  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:38.030285  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:38.093462  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:38.093482  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:40.622687  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:40.632840  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:40.632902  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:40.657235  836363 cri.go:89] found id: ""
	I1210 06:39:40.657248  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.657255  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:40.657261  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:40.657320  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:40.681835  836363 cri.go:89] found id: ""
	I1210 06:39:40.681849  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.681857  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:40.681862  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:40.681919  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:40.708085  836363 cri.go:89] found id: ""
	I1210 06:39:40.708099  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.708106  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:40.708111  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:40.708172  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:40.734852  836363 cri.go:89] found id: ""
	I1210 06:39:40.734867  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.734874  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:40.734879  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:40.734937  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:40.760765  836363 cri.go:89] found id: ""
	I1210 06:39:40.760779  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.760786  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:40.760791  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:40.760862  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:40.785777  836363 cri.go:89] found id: ""
	I1210 06:39:40.785791  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.785797  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:40.785802  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:40.785862  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:40.812943  836363 cri.go:89] found id: ""
	I1210 06:39:40.812957  836363 logs.go:282] 0 containers: []
	W1210 06:39:40.812963  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:40.812971  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:40.812981  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:40.882713  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:40.874213   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:40.874907   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:40.876393   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:40.876781   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:40.878311   11875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:40.882724  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:40.882746  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:40.946502  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:40.946522  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:40.973695  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:40.973711  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:41.028086  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:41.028105  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:43.544743  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:43.554582  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:43.554639  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:43.578394  836363 cri.go:89] found id: ""
	I1210 06:39:43.578408  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.578415  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:43.578421  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:43.578501  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:43.602120  836363 cri.go:89] found id: ""
	I1210 06:39:43.602134  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.602141  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:43.602152  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:43.602211  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:43.626641  836363 cri.go:89] found id: ""
	I1210 06:39:43.626655  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.626662  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:43.626666  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:43.626730  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:43.650792  836363 cri.go:89] found id: ""
	I1210 06:39:43.650805  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.650812  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:43.650817  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:43.650875  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:43.676181  836363 cri.go:89] found id: ""
	I1210 06:39:43.676195  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.676201  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:43.676207  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:43.676264  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:43.700288  836363 cri.go:89] found id: ""
	I1210 06:39:43.700301  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.700308  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:43.700317  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:43.700376  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:43.723140  836363 cri.go:89] found id: ""
	I1210 06:39:43.723154  836363 logs.go:282] 0 containers: []
	W1210 06:39:43.723161  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:43.723169  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:43.723179  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:43.777323  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:43.777344  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:43.793764  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:43.793781  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:43.876520  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:43.868105   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:43.868820   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:43.870334   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:43.870859   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:43.872328   11983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:43.876531  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:43.876546  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:43.937962  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:43.937982  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:46.471232  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:46.481349  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:46.481414  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:46.505604  836363 cri.go:89] found id: ""
	I1210 06:39:46.505618  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.505625  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:46.505631  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:46.505693  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:46.530584  836363 cri.go:89] found id: ""
	I1210 06:39:46.530598  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.530605  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:46.530610  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:46.530667  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:46.555675  836363 cri.go:89] found id: ""
	I1210 06:39:46.555689  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.555696  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:46.555701  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:46.555758  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:46.579225  836363 cri.go:89] found id: ""
	I1210 06:39:46.579240  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.579246  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:46.579252  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:46.579309  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:46.603318  836363 cri.go:89] found id: ""
	I1210 06:39:46.603332  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.603339  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:46.603344  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:46.603400  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:46.628198  836363 cri.go:89] found id: ""
	I1210 06:39:46.628212  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.628219  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:46.628224  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:46.628280  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:46.651425  836363 cri.go:89] found id: ""
	I1210 06:39:46.651439  836363 logs.go:282] 0 containers: []
	W1210 06:39:46.651446  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:46.651454  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:46.651464  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:46.706345  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:46.706364  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:46.722718  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:46.722733  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:46.788441  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:46.780334   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.780989   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.782563   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.783115   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:46.784714   12083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:46.788461  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:46.788474  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:46.856250  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:46.856269  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:49.385907  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:49.395772  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:49.395833  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:49.419273  836363 cri.go:89] found id: ""
	I1210 06:39:49.419286  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.419294  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:49.419299  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:49.419357  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:49.444546  836363 cri.go:89] found id: ""
	I1210 06:39:49.444560  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.444567  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:49.444572  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:49.444634  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:49.469099  836363 cri.go:89] found id: ""
	I1210 06:39:49.469113  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.469120  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:49.469125  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:49.469182  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:49.497447  836363 cri.go:89] found id: ""
	I1210 06:39:49.497461  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.497468  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:49.497473  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:49.497531  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:49.521614  836363 cri.go:89] found id: ""
	I1210 06:39:49.521628  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.521635  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:49.521640  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:49.521700  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:49.546324  836363 cri.go:89] found id: ""
	I1210 06:39:49.546338  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.546345  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:49.546351  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:49.546408  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:49.569503  836363 cri.go:89] found id: ""
	I1210 06:39:49.569516  836363 logs.go:282] 0 containers: []
	W1210 06:39:49.569523  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:49.569531  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:49.569541  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:49.625182  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:49.625201  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:49.641754  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:49.641772  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:49.705447  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:49.697491   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.698134   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.699780   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.700234   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:49.701724   12185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:49.705457  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:49.705478  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:49.766615  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:49.766634  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:52.302628  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:52.312769  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:52.312832  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:52.338228  836363 cri.go:89] found id: ""
	I1210 06:39:52.338242  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.338249  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:52.338254  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:52.338315  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:52.363997  836363 cri.go:89] found id: ""
	I1210 06:39:52.364011  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.364018  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:52.364024  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:52.364083  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:52.389867  836363 cri.go:89] found id: ""
	I1210 06:39:52.389881  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.389888  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:52.389894  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:52.389959  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:52.416171  836363 cri.go:89] found id: ""
	I1210 06:39:52.416186  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.416193  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:52.416199  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:52.416262  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:52.440036  836363 cri.go:89] found id: ""
	I1210 06:39:52.440051  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.440058  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:52.440064  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:52.440127  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:52.465173  836363 cri.go:89] found id: ""
	I1210 06:39:52.465188  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.465195  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:52.465200  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:52.465266  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:52.490275  836363 cri.go:89] found id: ""
	I1210 06:39:52.490289  836363 logs.go:282] 0 containers: []
	W1210 06:39:52.490296  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:52.490304  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:52.490316  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:52.507524  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:52.507541  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:52.572947  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:52.565302   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.565716   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.567214   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.567524   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:52.569003   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:52.572957  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:52.572967  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:52.639898  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:52.639920  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:52.671836  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:52.671853  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:55.228555  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:55.238632  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:55.238692  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:55.262819  836363 cri.go:89] found id: ""
	I1210 06:39:55.262833  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.262840  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:55.262845  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:55.262903  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:55.287262  836363 cri.go:89] found id: ""
	I1210 06:39:55.287276  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.287282  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:55.287287  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:55.287347  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:55.312064  836363 cri.go:89] found id: ""
	I1210 06:39:55.312077  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.312084  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:55.312089  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:55.312147  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:55.340546  836363 cri.go:89] found id: ""
	I1210 06:39:55.340560  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.340566  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:55.340572  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:55.340638  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:55.369203  836363 cri.go:89] found id: ""
	I1210 06:39:55.369217  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.369224  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:55.369229  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:55.369294  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:55.394186  836363 cri.go:89] found id: ""
	I1210 06:39:55.394200  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.394213  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:55.394218  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:55.394275  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:55.418250  836363 cri.go:89] found id: ""
	I1210 06:39:55.418264  836363 logs.go:282] 0 containers: []
	W1210 06:39:55.418271  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:55.418279  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:55.418293  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:55.449481  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:55.449497  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:55.505651  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:55.505670  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:55.522722  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:55.522739  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:55.595372  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:55.580192   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.580773   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.588978   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.589842   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:55.591512   12400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:55.595383  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:55.595396  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
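By this point the report has shown repeated probe cycles between 06:39:32 and 06:39:58 in which no control-plane container ever appeared and port 8441 never answered. When every component is missing like this, the kubelet journal gathered above is the place to look for the root cause; it can also be pulled directly with the same command the log runs, wrapped in minikube ssh (profile name taken from this run):

    minikube -p functional-534748 ssh -- sudo journalctl -u kubelet -n 400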
	I1210 06:39:58.156956  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:58.167095  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:58.167157  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:58.191075  836363 cri.go:89] found id: ""
	I1210 06:39:58.191089  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.191096  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:39:58.191101  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:58.191161  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:58.219145  836363 cri.go:89] found id: ""
	I1210 06:39:58.219159  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.219166  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:39:58.219171  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:58.219230  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:58.243820  836363 cri.go:89] found id: ""
	I1210 06:39:58.243834  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.243841  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:39:58.243846  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:58.243903  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:58.273220  836363 cri.go:89] found id: ""
	I1210 06:39:58.273234  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.273241  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:39:58.273246  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:58.273306  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:58.296744  836363 cri.go:89] found id: ""
	I1210 06:39:58.296758  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.296765  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:58.296770  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:58.296826  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:58.321374  836363 cri.go:89] found id: ""
	I1210 06:39:58.321389  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.321395  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:39:58.321401  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:58.321460  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:58.345587  836363 cri.go:89] found id: ""
	I1210 06:39:58.345601  836363 logs.go:282] 0 containers: []
	W1210 06:39:58.345607  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:58.345615  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:58.345626  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:58.363238  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:58.363255  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:58.430409  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:39:58.422109   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.422784   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.424524   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.425019   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:39:58.426627   12494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:39:58.430420  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:58.430439  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:58.492984  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:39:58.493002  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:58.520139  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:58.520155  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
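The sweep that precedes each retry queries crictl once per expected control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) and finds none of them. As a sketch, the same enumeration collapses into one loop using only the command the log already shows, run inside the node:

	# Sketch: one-loop version of the per-cycle container sweep
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  echo "$name: ${ids:-<none found>}"
	done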
	I1210 06:40:01.076701  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:01.088176  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:01.088237  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:01.115625  836363 cri.go:89] found id: ""
	I1210 06:40:01.115641  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.115648  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:01.115653  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:01.115713  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:01.142756  836363 cri.go:89] found id: ""
	I1210 06:40:01.142771  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.142779  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:01.142784  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:01.142854  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:01.174021  836363 cri.go:89] found id: ""
	I1210 06:40:01.174036  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.174043  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:01.174048  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:01.174115  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:01.200639  836363 cri.go:89] found id: ""
	I1210 06:40:01.200654  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.200661  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:01.200667  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:01.200729  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:01.225759  836363 cri.go:89] found id: ""
	I1210 06:40:01.225772  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.225779  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:01.225785  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:01.225851  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:01.250911  836363 cri.go:89] found id: ""
	I1210 06:40:01.250926  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.250934  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:01.250940  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:01.251003  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:01.279325  836363 cri.go:89] found id: ""
	I1210 06:40:01.279339  836363 logs.go:282] 0 containers: []
	W1210 06:40:01.279347  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:01.279355  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:01.279366  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:01.335352  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:01.335371  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:01.352578  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:01.352596  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:01.422752  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:01.414308   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.415520   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.417210   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.417554   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:01.418810   12602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:40:01.422763  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:01.422778  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:01.484637  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:01.484658  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:04.016723  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:04.027134  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:04.027199  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:04.058110  836363 cri.go:89] found id: ""
	I1210 06:40:04.058123  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.058131  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:04.058136  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:04.058194  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:04.085839  836363 cri.go:89] found id: ""
	I1210 06:40:04.085853  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.085859  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:04.085874  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:04.085938  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:04.112846  836363 cri.go:89] found id: ""
	I1210 06:40:04.112870  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.112877  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:04.112884  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:04.112952  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:04.144605  836363 cri.go:89] found id: ""
	I1210 06:40:04.144619  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.144626  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:04.144631  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:04.144698  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:04.170078  836363 cri.go:89] found id: ""
	I1210 06:40:04.170093  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.170111  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:04.170116  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:04.170187  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:04.195493  836363 cri.go:89] found id: ""
	I1210 06:40:04.195560  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.195568  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:04.195573  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:04.195663  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:04.224488  836363 cri.go:89] found id: ""
	I1210 06:40:04.224502  836363 logs.go:282] 0 containers: []
	W1210 06:40:04.224509  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:04.224518  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:04.224528  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:04.280631  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:04.280651  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:04.297645  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:04.297663  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:04.366830  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:04.356738   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.357139   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.358834   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.359561   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:04.361366   12708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:40:04.366842  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:04.366854  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:04.430241  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:04.430260  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:06.963156  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:06.973415  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:06.973480  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:06.997210  836363 cri.go:89] found id: ""
	I1210 06:40:06.997223  836363 logs.go:282] 0 containers: []
	W1210 06:40:06.997230  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:06.997235  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:06.997292  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:07.024360  836363 cri.go:89] found id: ""
	I1210 06:40:07.024374  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.024381  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:07.024386  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:07.024443  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:07.056844  836363 cri.go:89] found id: ""
	I1210 06:40:07.056857  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.056864  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:07.056869  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:07.056926  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:07.095983  836363 cri.go:89] found id: ""
	I1210 06:40:07.095997  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.096004  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:07.096010  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:07.096080  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:07.126932  836363 cri.go:89] found id: ""
	I1210 06:40:07.126947  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.126954  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:07.126958  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:07.127020  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:07.151807  836363 cri.go:89] found id: ""
	I1210 06:40:07.151823  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.151831  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:07.151835  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:07.151895  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:07.175459  836363 cri.go:89] found id: ""
	I1210 06:40:07.175473  836363 logs.go:282] 0 containers: []
	W1210 06:40:07.175480  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:07.175489  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:07.175499  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:07.229963  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:07.229984  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:07.249632  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:07.249654  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:07.314011  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:07.306140   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.306891   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.308497   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.308812   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:07.310262   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:40:07.314022  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:07.314034  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:07.376148  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:07.376173  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:09.907917  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:09.918267  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:09.918339  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:09.946634  836363 cri.go:89] found id: ""
	I1210 06:40:09.946648  836363 logs.go:282] 0 containers: []
	W1210 06:40:09.946654  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:09.946660  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:09.946729  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:09.971532  836363 cri.go:89] found id: ""
	I1210 06:40:09.971546  836363 logs.go:282] 0 containers: []
	W1210 06:40:09.971553  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:09.971558  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:09.971633  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:09.995748  836363 cri.go:89] found id: ""
	I1210 06:40:09.995762  836363 logs.go:282] 0 containers: []
	W1210 06:40:09.995768  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:09.995773  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:09.995832  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:10.026807  836363 cri.go:89] found id: ""
	I1210 06:40:10.026821  836363 logs.go:282] 0 containers: []
	W1210 06:40:10.026828  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:10.026834  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:10.026902  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:10.060800  836363 cri.go:89] found id: ""
	I1210 06:40:10.060815  836363 logs.go:282] 0 containers: []
	W1210 06:40:10.060822  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:10.060831  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:10.060896  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:10.092175  836363 cri.go:89] found id: ""
	I1210 06:40:10.092190  836363 logs.go:282] 0 containers: []
	W1210 06:40:10.092200  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:10.092205  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:10.092267  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:10.121165  836363 cri.go:89] found id: ""
	I1210 06:40:10.121179  836363 logs.go:282] 0 containers: []
	W1210 06:40:10.121187  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:10.121197  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:10.121208  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:10.137742  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:10.137761  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:10.202959  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:10.194457   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.195204   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.196954   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.197466   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:10.199061   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:40:10.202970  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:10.202993  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:10.263838  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:10.263860  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:10.290431  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:10.290450  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:12.845609  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:12.856045  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:12.856108  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:12.881725  836363 cri.go:89] found id: ""
	I1210 06:40:12.881740  836363 logs.go:282] 0 containers: []
	W1210 06:40:12.881756  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:12.881762  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:12.881836  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:12.905554  836363 cri.go:89] found id: ""
	I1210 06:40:12.905568  836363 logs.go:282] 0 containers: []
	W1210 06:40:12.905575  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:12.905580  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:12.905636  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:12.929343  836363 cri.go:89] found id: ""
	I1210 06:40:12.929357  836363 logs.go:282] 0 containers: []
	W1210 06:40:12.929363  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:12.929369  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:12.929427  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:12.958063  836363 cri.go:89] found id: ""
	I1210 06:40:12.958077  836363 logs.go:282] 0 containers: []
	W1210 06:40:12.958083  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:12.958089  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:12.958153  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:12.982226  836363 cri.go:89] found id: ""
	I1210 06:40:12.982240  836363 logs.go:282] 0 containers: []
	W1210 06:40:12.982247  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:12.982252  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:12.982309  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:13.008275  836363 cri.go:89] found id: ""
	I1210 06:40:13.008296  836363 logs.go:282] 0 containers: []
	W1210 06:40:13.008304  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:13.008309  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:13.008376  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:13.032141  836363 cri.go:89] found id: ""
	I1210 06:40:13.032155  836363 logs.go:282] 0 containers: []
	W1210 06:40:13.032161  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:13.032169  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:13.032180  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:13.094529  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:13.094550  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:13.112774  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:13.112794  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:13.177133  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:13.169073   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.169669   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.171355   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.171773   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:13.173232   13024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:40:13.177142  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:13.177157  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:13.237784  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:13.237804  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:15.773100  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:15.783808  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:15.783870  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:15.808779  836363 cri.go:89] found id: ""
	I1210 06:40:15.808792  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.808799  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:15.808811  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:15.808873  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:15.835122  836363 cri.go:89] found id: ""
	I1210 06:40:15.835136  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.835143  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:15.835147  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:15.835205  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:15.859608  836363 cri.go:89] found id: ""
	I1210 06:40:15.859622  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.859630  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:15.859635  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:15.859698  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:15.884617  836363 cri.go:89] found id: ""
	I1210 06:40:15.884631  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.884637  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:15.884648  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:15.884708  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:15.917645  836363 cri.go:89] found id: ""
	I1210 06:40:15.917659  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.917666  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:15.917671  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:15.917738  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:15.942216  836363 cri.go:89] found id: ""
	I1210 06:40:15.942230  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.942237  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:15.942246  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:15.942306  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:15.969023  836363 cri.go:89] found id: ""
	I1210 06:40:15.969038  836363 logs.go:282] 0 containers: []
	W1210 06:40:15.969045  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:15.969053  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:15.969065  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:16.025303  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:16.025322  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:16.043036  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:16.043055  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:16.124792  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:16.116191   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.116732   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.118445   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.118910   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:16.120594   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:40:16.124803  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:16.124829  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:16.187018  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:16.187038  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:18.721268  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:18.732117  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:18.732179  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:18.759703  836363 cri.go:89] found id: ""
	I1210 06:40:18.759717  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.759724  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:18.759729  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:18.759803  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:18.785469  836363 cri.go:89] found id: ""
	I1210 06:40:18.785482  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.785492  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:18.785497  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:18.785556  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:18.809013  836363 cri.go:89] found id: ""
	I1210 06:40:18.809026  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.809033  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:18.809038  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:18.809100  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:18.837693  836363 cri.go:89] found id: ""
	I1210 06:40:18.837707  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.837714  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:18.837719  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:18.837777  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:18.862280  836363 cri.go:89] found id: ""
	I1210 06:40:18.862294  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.862300  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:18.862306  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:18.862366  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:18.887552  836363 cri.go:89] found id: ""
	I1210 06:40:18.887566  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.887573  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:18.887578  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:18.887644  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:18.912972  836363 cri.go:89] found id: ""
	I1210 06:40:18.912987  836363 logs.go:282] 0 containers: []
	W1210 06:40:18.912994  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:18.913002  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:18.913020  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:18.968777  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:18.968818  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:18.987249  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:18.987267  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:19.053510  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:19.044510   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.045135   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.047145   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.047494   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:19.049061   13226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:40:19.053536  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:19.053548  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:19.127699  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:19.127719  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
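Each cycle ends at the same dead end: dial tcp [::1]:8441: connection refused, i.e. nothing is listening on the API server port. Two quick confirmations from inside the node; the first assumes the iproute2 `ss` tool is available in the node image (an assumption, not something this log shows), the second reuses the pgrep probe the log itself runs:

	# Assumed-available: list listeners on the apiserver port (expect no output)
	sudo ss -ltn 'sport = :8441'
	# Verbatim from the log: look for a kube-apiserver process (exit 1 = none)
	sudo pgrep -xnf kube-apiserver.*minikube.*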
	I1210 06:40:21.655771  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:21.665930  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:21.665996  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:21.690403  836363 cri.go:89] found id: ""
	I1210 06:40:21.690417  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.690424  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:21.690429  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:21.690526  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:21.716021  836363 cri.go:89] found id: ""
	I1210 06:40:21.716035  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.716042  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:21.716047  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:21.716110  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:21.740524  836363 cri.go:89] found id: ""
	I1210 06:40:21.740538  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.740545  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:21.740551  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:21.740610  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:21.764686  836363 cri.go:89] found id: ""
	I1210 06:40:21.764699  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.764706  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:21.764711  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:21.764768  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:21.789476  836363 cri.go:89] found id: ""
	I1210 06:40:21.789490  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.789497  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:21.789502  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:21.789567  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:21.815957  836363 cri.go:89] found id: ""
	I1210 06:40:21.815973  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.815981  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:21.815986  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:21.816046  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:21.844568  836363 cri.go:89] found id: ""
	I1210 06:40:21.844582  836363 logs.go:282] 0 containers: []
	W1210 06:40:21.844589  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:21.844597  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:21.844607  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:21.900940  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:21.900960  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:21.919059  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:21.919078  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:21.988088  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:21.979038   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.979792   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.981425   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.981947   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.983526   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:21.979038   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.979792   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.981425   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.981947   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:21.983526   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:21.988098  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:21.988109  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:22.051814  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:22.051834  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:24.585034  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:24.595723  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:24.595789  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:24.624873  836363 cri.go:89] found id: ""
	I1210 06:40:24.624888  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.624895  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:24.624900  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:24.624966  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:24.649543  836363 cri.go:89] found id: ""
	I1210 06:40:24.649557  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.649564  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:24.649570  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:24.649680  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:24.675056  836363 cri.go:89] found id: ""
	I1210 06:40:24.675080  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.675088  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:24.675093  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:24.675154  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:24.700453  836363 cri.go:89] found id: ""
	I1210 06:40:24.700466  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.700474  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:24.700479  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:24.700537  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:24.726867  836363 cri.go:89] found id: ""
	I1210 06:40:24.726881  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.726887  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:24.726893  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:24.726955  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:24.751980  836363 cri.go:89] found id: ""
	I1210 06:40:24.751994  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.752002  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:24.752007  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:24.752068  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:24.782328  836363 cri.go:89] found id: ""
	I1210 06:40:24.782342  836363 logs.go:282] 0 containers: []
	W1210 06:40:24.782349  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:24.782357  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:24.782367  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:24.845411  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:24.845431  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:24.874554  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:24.874571  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:24.930797  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:24.930817  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:24.947891  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:24.947910  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:25.021562  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:25.011415   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.012701   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.013093   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.014987   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.015492   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:25.011415   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.012701   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.013093   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.014987   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:25.015492   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
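Each retry walks the same control-plane component list with sudo crictl ps -a --quiet --name=<component> and finds zero containers for every one of them. A short sketch of that probe loop, assuming crictl is on PATH and sudo is non-interactive; the component list and command shape are taken verbatim from the log lines above, everything else is illustrative.

    // crictl_probe.go — illustrative sketch of the per-component probe
    // visible in the log; not minikube's implementation.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            // Same command shape as the log: sudo crictl ps -a --quiet --name=<component>
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                // The W-level "No container was found matching" case above.
                fmt.Printf("no container found matching %q\n", name)
            } else {
                fmt.Printf("%s: %d container(s)\n", name, len(ids))
            }
        }
    }

On this node every component prints the "no container found" branch, which is why the subsequent log gathering falls back to journalctl and dmesg only.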
	I1210 06:40:27.522215  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:27.533345  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:27.533449  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:27.562516  836363 cri.go:89] found id: ""
	I1210 06:40:27.562529  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.562538  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:27.562543  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:27.562612  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:27.589053  836363 cri.go:89] found id: ""
	I1210 06:40:27.589081  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.589089  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:27.589098  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:27.589171  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:27.614058  836363 cri.go:89] found id: ""
	I1210 06:40:27.614072  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.614079  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:27.614084  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:27.614142  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:27.639274  836363 cri.go:89] found id: ""
	I1210 06:40:27.639288  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.639296  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:27.639310  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:27.639369  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:27.667535  836363 cri.go:89] found id: ""
	I1210 06:40:27.667549  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.667556  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:27.667561  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:27.667630  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:27.691075  836363 cri.go:89] found id: ""
	I1210 06:40:27.691090  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.691097  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:27.691102  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:27.691161  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:27.716129  836363 cri.go:89] found id: ""
	I1210 06:40:27.716142  836363 logs.go:282] 0 containers: []
	W1210 06:40:27.716150  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:27.716157  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:27.716168  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:27.771440  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:27.771460  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:27.788230  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:27.788248  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:27.854509  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:27.846532   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.847111   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.848660   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.849114   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.850650   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:27.846532   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.847111   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.848660   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.849114   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:27.850650   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:27.854521  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:27.854533  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:27.922148  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:27.922172  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:30.451005  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:30.461920  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:30.461982  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:30.489712  836363 cri.go:89] found id: ""
	I1210 06:40:30.489727  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.489734  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:30.489739  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:30.489800  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:30.513093  836363 cri.go:89] found id: ""
	I1210 06:40:30.513107  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.513114  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:30.513119  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:30.513196  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:30.539836  836363 cri.go:89] found id: ""
	I1210 06:40:30.539850  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.539857  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:30.539862  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:30.539921  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:30.563675  836363 cri.go:89] found id: ""
	I1210 06:40:30.563689  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.563696  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:30.563701  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:30.563768  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:30.587925  836363 cri.go:89] found id: ""
	I1210 06:40:30.587939  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.587946  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:30.587951  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:30.588014  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:30.612003  836363 cri.go:89] found id: ""
	I1210 06:40:30.612018  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.612025  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:30.612031  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:30.612094  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:30.640838  836363 cri.go:89] found id: ""
	I1210 06:40:30.640853  836363 logs.go:282] 0 containers: []
	W1210 06:40:30.640860  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:30.640868  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:30.640879  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:30.696168  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:30.696189  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:30.712444  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:30.712461  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:30.779602  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:30.771019   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.771655   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.773324   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.773861   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.775400   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:30.771019   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.771655   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.773324   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.773861   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:30.775400   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:30.779612  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:30.779623  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:30.840751  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:30.840772  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
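The five memcache.go:265 lines per attempt are the client's API discovery retrying the same endpoint, https://localhost:8441/api?timeout=32s. That request can be reproduced directly; a hedged sketch, using a throwaway HTTP client with TLS verification disabled because the point is only to observe the refused connection, not to authenticate against a real apiserver.

    // discovery_probe.go — illustrative sketch, not kubectl/client-go code.
    // Issues the same discovery request the errors above show failing.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
    )

    func main() {
        // TLS verification is skipped: this probe only checks reachability.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://localhost:8441/api?timeout=32s")
        if err != nil {
            fmt.Println("discovery failed:", err) // refused while no apiserver runs
            return
        }
        defer resp.Body.Close()
        fmt.Println("apiserver answered:", resp.Status)
    }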
	I1210 06:40:33.372644  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:33.382802  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:33.382862  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:33.407793  836363 cri.go:89] found id: ""
	I1210 06:40:33.407807  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.407815  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:33.407820  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:33.407877  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:33.430878  836363 cri.go:89] found id: ""
	I1210 06:40:33.430892  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.430899  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:33.430904  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:33.430960  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:33.454595  836363 cri.go:89] found id: ""
	I1210 06:40:33.454609  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.454616  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:33.454621  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:33.454678  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:33.479328  836363 cri.go:89] found id: ""
	I1210 06:40:33.479342  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.479349  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:33.479354  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:33.479416  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:33.503717  836363 cri.go:89] found id: ""
	I1210 06:40:33.503731  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.503744  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:33.503750  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:33.503811  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:33.527968  836363 cri.go:89] found id: ""
	I1210 06:40:33.527982  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.527989  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:33.527994  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:33.528076  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:33.552452  836363 cri.go:89] found id: ""
	I1210 06:40:33.552465  836363 logs.go:282] 0 containers: []
	W1210 06:40:33.552472  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:33.552480  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:33.552490  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:33.586111  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:33.586127  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:33.644722  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:33.644742  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:33.663073  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:33.663090  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:33.731033  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:33.723109   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.723789   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.725296   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.725727   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.727228   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:33.723109   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.723789   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.725296   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.725727   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:33.727228   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:33.731044  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:33.731060  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:36.294593  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:36.306076  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:36.306134  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:36.334361  836363 cri.go:89] found id: ""
	I1210 06:40:36.334376  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.334383  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:36.334388  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:36.334447  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:36.361890  836363 cri.go:89] found id: ""
	I1210 06:40:36.361904  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.361911  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:36.361916  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:36.361977  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:36.387023  836363 cri.go:89] found id: ""
	I1210 06:40:36.387037  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.387044  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:36.387050  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:36.387109  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:36.411981  836363 cri.go:89] found id: ""
	I1210 06:40:36.411995  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.412011  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:36.412016  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:36.412085  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:36.436105  836363 cri.go:89] found id: ""
	I1210 06:40:36.436119  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.436136  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:36.436142  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:36.436215  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:36.463709  836363 cri.go:89] found id: ""
	I1210 06:40:36.463724  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.463731  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:36.463737  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:36.463795  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:36.492482  836363 cri.go:89] found id: ""
	I1210 06:40:36.492496  836363 logs.go:282] 0 containers: []
	W1210 06:40:36.492503  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:36.492512  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:36.492522  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:36.551191  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:36.551210  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:36.568166  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:36.568183  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:36.635783  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:36.627478   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.627875   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.629429   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.629764   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.631231   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:36.627478   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.627875   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.629429   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.629764   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:36.631231   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:36.635793  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:36.635806  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:36.706158  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:36.706182  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:39.240421  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:39.250806  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:39.250867  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:39.275350  836363 cri.go:89] found id: ""
	I1210 06:40:39.275363  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.275370  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:39.275375  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:39.275431  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:39.309499  836363 cri.go:89] found id: ""
	I1210 06:40:39.309515  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.309522  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:39.309527  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:39.309605  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:39.335376  836363 cri.go:89] found id: ""
	I1210 06:40:39.335390  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.335397  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:39.335401  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:39.335460  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:39.364171  836363 cri.go:89] found id: ""
	I1210 06:40:39.364185  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.364192  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:39.364197  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:39.364261  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:39.390366  836363 cri.go:89] found id: ""
	I1210 06:40:39.390381  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.390388  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:39.390393  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:39.390456  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:39.418420  836363 cri.go:89] found id: ""
	I1210 06:40:39.418434  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.418441  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:39.418448  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:39.418525  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:39.443654  836363 cri.go:89] found id: ""
	I1210 06:40:39.443667  836363 logs.go:282] 0 containers: []
	W1210 06:40:39.443674  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:39.443683  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:39.443693  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:39.508605  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:39.508627  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:39.541642  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:39.541657  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:39.598637  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:39.598658  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:39.614821  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:39.614837  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:39.681178  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:39.672580   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.673146   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.674858   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.675468   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.677195   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:39.672580   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.673146   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.674858   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.675468   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:39.677195   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
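The timestamps show the whole probe-and-gather pass restarting roughly every three seconds (06:40:19 → :21 → :24 → :27 → ...) until the apiserver process appears or the overall start timeout expires. An illustrative outline of that cadence, with made-up helper names and abbreviated flags; not minikube's actual control flow.

    // gather_loop.go — an outline of the ~3 s retry cadence seen above;
    // helper names are hypothetical, not minikube's implementation.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // gather runs one log-collection command, discarding output for brevity.
    func gather(label string, args ...string) {
        fmt.Printf("Gathering logs for %s ...\n", label)
        _ = exec.Command(args[0], args[1:]...).Run()
    }

    func main() {
        for {
            // Same liveness check as the log's pgrep line.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                fmt.Println("kube-apiserver is running")
                return
            }
            gather("kubelet", "sudo", "journalctl", "-u", "kubelet", "-n", "400")
            gather("containerd", "sudo", "journalctl", "-u", "containerd", "-n", "400")
            gather("dmesg", "sudo", "dmesg", "--level", "warn,err,crit,alert,emerg")
            time.Sleep(3 * time.Second)
        }
    }

In the failing run the liveness check never succeeds, so the loop repeats until the test's 500-plus-second budget is exhausted.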
	I1210 06:40:42.181674  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:42.194020  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:42.194088  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:42.223014  836363 cri.go:89] found id: ""
	I1210 06:40:42.223033  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.223041  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:42.223053  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:42.223128  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:42.250171  836363 cri.go:89] found id: ""
	I1210 06:40:42.250186  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.250193  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:42.250199  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:42.250267  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:42.276322  836363 cri.go:89] found id: ""
	I1210 06:40:42.276343  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.276350  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:42.276356  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:42.276417  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:42.312287  836363 cri.go:89] found id: ""
	I1210 06:40:42.312302  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.312309  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:42.312314  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:42.312379  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:42.339930  836363 cri.go:89] found id: ""
	I1210 06:40:42.339944  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.339951  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:42.339956  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:42.340014  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:42.367830  836363 cri.go:89] found id: ""
	I1210 06:40:42.367844  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.367851  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:42.367857  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:42.367919  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:42.392070  836363 cri.go:89] found id: ""
	I1210 06:40:42.392084  836363 logs.go:282] 0 containers: []
	W1210 06:40:42.392091  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:42.392099  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:42.392109  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:42.426049  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:42.426065  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:42.481003  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:42.481025  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:42.497786  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:42.497804  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:42.565103  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:42.556363   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.556746   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.558351   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.558980   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.560866   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:42.556363   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.556746   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.558351   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.558980   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:42.560866   14076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:42.565114  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:42.565124  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:45.129131  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:45.143244  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:45.143317  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:45.185169  836363 cri.go:89] found id: ""
	I1210 06:40:45.185203  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.185235  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:45.185259  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:45.185400  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:45.232743  836363 cri.go:89] found id: ""
	I1210 06:40:45.232760  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.232767  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:45.232774  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:45.232857  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:45.264531  836363 cri.go:89] found id: ""
	I1210 06:40:45.264564  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.264573  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:45.264585  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:45.264652  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:45.304876  836363 cri.go:89] found id: ""
	I1210 06:40:45.304891  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.304898  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:45.304912  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:45.304975  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:45.332686  836363 cri.go:89] found id: ""
	I1210 06:40:45.332700  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.332707  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:45.332713  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:45.332772  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:45.361418  836363 cri.go:89] found id: ""
	I1210 06:40:45.361443  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.361454  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:45.361460  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:45.361549  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:45.389935  836363 cri.go:89] found id: ""
	I1210 06:40:45.389949  836363 logs.go:282] 0 containers: []
	W1210 06:40:45.389955  836363 logs.go:284] No container was found matching "kindnet"
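	Each poll cycle sweeps the expected control-plane components with the same `crictl ps` filter; an empty ID list is what yields the "No container was found matching" warnings. Because `-a` includes exited containers, an empty result means the component was never created at all, not merely that it crashed. The sweep reduces to a loop like this (same flags as in the log, run inside the node):

```bash
# List containers in any state whose name matches each expected component.
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
            kube-controller-manager kindnet; do
  ids=$(sudo crictl ps -a --quiet --name="$name")
  [ -z "$ids" ] && echo "no container matching \"$name\""
done
```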
	I1210 06:40:45.389963  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:45.389973  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:45.446063  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:45.446081  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
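	For reference, the dmesg invocation filters the kernel ring buffer down to warnings and worse: `-H` prints human-readable timestamps, `-P` suppresses the pager, `-L=never` disables color, and `--level` keeps only the listed priorities. Spelled out on its own:

```bash
# Kernel messages at priority warn and above, plain uncolored output, last 400 lines.
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
```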
	I1210 06:40:45.463171  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:45.463188  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:45.529007  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:45.520759   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.521319   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.522920   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.523417   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.524918   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:45.520759   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.521319   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.522920   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.523417   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:45.524918   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
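	The describe-nodes probe uses the version-matched kubectl binary that minikube stages under /var/lib/minikube/binaries, pointed at the node-local kubeconfig rather than the host's. Reproduced by hand inside the node (paths exactly as logged; with no apiserver on :8441 it exits 1 with the connection-refused errors shown above):

```bash
# Same probe as the log: the bundled kubectl against the node's own kubeconfig.
sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
    --kubeconfig=/var/lib/minikube/kubeconfig
```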
	I1210 06:40:45.529017  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:45.529027  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:45.596607  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:45.596629  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
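	The container-status step is deliberately defensive: it resolves crictl via `which` (falling back to the bare name so a failure still names the missing tool) and, if the crictl listing fails outright, falls back to `docker ps -a`. The same fallback chain in isolation:

```bash
# Prefer the CRI CLI; fall back to Docker when crictl is absent or unusable.
sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
```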
	I1210 06:40:48.127693  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:48.138167  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:48.138229  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:48.163699  836363 cri.go:89] found id: ""
	I1210 06:40:48.163713  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.163720  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:48.163726  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:48.163788  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:48.187478  836363 cri.go:89] found id: ""
	I1210 06:40:48.187491  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.187498  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:48.187503  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:48.187571  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:48.210551  836363 cri.go:89] found id: ""
	I1210 06:40:48.210565  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.210572  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:48.210577  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:48.210635  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:48.234710  836363 cri.go:89] found id: ""
	I1210 06:40:48.234723  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.234730  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:48.234735  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:48.234792  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:48.257754  836363 cri.go:89] found id: ""
	I1210 06:40:48.257767  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.257774  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:48.257779  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:48.257837  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:48.281482  836363 cri.go:89] found id: ""
	I1210 06:40:48.281497  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.281503  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:48.281508  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:48.281571  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:48.321472  836363 cri.go:89] found id: ""
	I1210 06:40:48.321486  836363 logs.go:282] 0 containers: []
	W1210 06:40:48.321493  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:48.321501  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:48.321519  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:48.353157  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:48.353176  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:48.414214  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:48.414234  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:48.431305  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:48.431324  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:48.504839  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:48.496885   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.497412   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.499192   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.499575   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.501075   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:48.496885   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.497412   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.499192   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.499575   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:48.501075   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:48.504849  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:48.504860  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:51.069620  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:51.080075  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:51.080142  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:51.110642  836363 cri.go:89] found id: ""
	I1210 06:40:51.110656  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.110663  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:51.110668  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:51.110735  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:51.135875  836363 cri.go:89] found id: ""
	I1210 06:40:51.135889  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.135897  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:51.135902  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:51.135969  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:51.160992  836363 cri.go:89] found id: ""
	I1210 06:40:51.161007  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.161014  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:51.161019  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:51.161079  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:51.190942  836363 cri.go:89] found id: ""
	I1210 06:40:51.190957  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.190964  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:51.190969  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:51.191028  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:51.214853  836363 cri.go:89] found id: ""
	I1210 06:40:51.214866  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.214873  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:51.214878  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:51.214934  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:51.238972  836363 cri.go:89] found id: ""
	I1210 06:40:51.238986  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.238993  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:51.238998  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:51.239056  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:51.263101  836363 cri.go:89] found id: ""
	I1210 06:40:51.263115  836363 logs.go:282] 0 containers: []
	W1210 06:40:51.263122  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:51.263130  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:51.263147  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:51.334552  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:51.325962   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.326878   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.328565   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.328869   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.330403   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:51.325962   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.326878   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.328565   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.328869   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:51.330403   14362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:51.334562  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:51.334574  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:51.405170  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:51.405189  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:51.433244  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:51.433261  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:51.491472  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:51.491494  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:54.008401  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:54.019572  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:54.019640  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:54.049412  836363 cri.go:89] found id: ""
	I1210 06:40:54.049427  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.049434  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:54.049439  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:54.049505  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:54.074298  836363 cri.go:89] found id: ""
	I1210 06:40:54.074313  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.074319  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:54.074324  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:54.074384  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:54.102940  836363 cri.go:89] found id: ""
	I1210 06:40:54.102954  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.102961  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:54.102966  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:54.103030  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:54.127504  836363 cri.go:89] found id: ""
	I1210 06:40:54.127543  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.127556  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:54.127561  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:54.127619  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:54.156807  836363 cri.go:89] found id: ""
	I1210 06:40:54.156822  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.156829  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:54.156833  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:54.156896  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:54.181320  836363 cri.go:89] found id: ""
	I1210 06:40:54.181335  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.181342  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:54.181348  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:54.181406  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:54.205593  836363 cri.go:89] found id: ""
	I1210 06:40:54.205605  836363 logs.go:282] 0 containers: []
	W1210 06:40:54.205612  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:54.205620  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:54.205631  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:54.222285  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:54.222301  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:54.288392  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:54.279932   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.280608   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.282205   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.282786   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.284468   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:54.279932   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.280608   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.282205   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.282786   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:54.284468   14469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:54.288402  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:54.288423  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:54.357504  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:54.357523  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:54.391376  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:54.391394  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:56.947968  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:56.957769  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:56.957833  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:56.981684  836363 cri.go:89] found id: ""
	I1210 06:40:56.981698  836363 logs.go:282] 0 containers: []
	W1210 06:40:56.981704  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:56.981709  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:56.981773  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:57.008321  836363 cri.go:89] found id: ""
	I1210 06:40:57.008336  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.008344  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:57.008348  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:57.008409  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:57.033150  836363 cri.go:89] found id: ""
	I1210 06:40:57.033164  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.033171  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:57.033175  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:57.033234  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:57.061083  836363 cri.go:89] found id: ""
	I1210 06:40:57.061096  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.061103  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:57.061108  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:57.061167  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:57.084352  836363 cri.go:89] found id: ""
	I1210 06:40:57.084366  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.084372  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:57.084377  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:57.084432  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:57.108194  836363 cri.go:89] found id: ""
	I1210 06:40:57.108225  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.108239  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:40:57.108244  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:57.108315  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:57.136912  836363 cri.go:89] found id: ""
	I1210 06:40:57.136926  836363 logs.go:282] 0 containers: []
	W1210 06:40:57.136935  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:57.136942  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:57.136953  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:57.198446  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:40:57.198510  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:57.225389  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:57.225406  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:57.283570  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:57.283589  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:57.301703  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:57.301727  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:57.380612  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:40:57.372663   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.373165   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.374676   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.375061   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.376625   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:40:57.372663   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.373165   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.374676   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.375061   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:40:57.376625   14596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
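	Note the cadence: the timestamps show a fresh `pgrep -xnf kube-apiserver.*minikube.*` probe roughly every three seconds (`-f` matches against the full command line, `-x` requires the whole line to match, `-n` returns only the newest PID), i.e. a plain wait-for-apiserver poll. A minimal sketch of such a loop, with an assumed 90-second budget (the real timeout is not visible in this excerpt):

```bash
# Poll for a running kube-apiserver process; the 90s budget is an assumption.
deadline=$((SECONDS + 90))
until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
  [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver never came up" >&2; exit 1; }
  sleep 3
done
```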
	I1210 06:40:59.880952  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:59.891486  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:59.891569  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:59.915927  836363 cri.go:89] found id: ""
	I1210 06:40:59.915941  836363 logs.go:282] 0 containers: []
	W1210 06:40:59.915947  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:40:59.915953  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:59.916013  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:59.944178  836363 cri.go:89] found id: ""
	I1210 06:40:59.944192  836363 logs.go:282] 0 containers: []
	W1210 06:40:59.944200  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:40:59.944205  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:59.944264  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:59.969112  836363 cri.go:89] found id: ""
	I1210 06:40:59.969126  836363 logs.go:282] 0 containers: []
	W1210 06:40:59.969133  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:40:59.969138  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:59.969201  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:59.994908  836363 cri.go:89] found id: ""
	I1210 06:40:59.994922  836363 logs.go:282] 0 containers: []
	W1210 06:40:59.994929  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:40:59.994934  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:59.994991  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:00.092005  836363 cri.go:89] found id: ""
	I1210 06:41:00.092022  836363 logs.go:282] 0 containers: []
	W1210 06:41:00.092030  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:00.092036  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:00.092110  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:00.176527  836363 cri.go:89] found id: ""
	I1210 06:41:00.176549  836363 logs.go:282] 0 containers: []
	W1210 06:41:00.176557  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:00.176563  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:00.176628  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:00.227381  836363 cri.go:89] found id: ""
	I1210 06:41:00.227398  836363 logs.go:282] 0 containers: []
	W1210 06:41:00.227406  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:00.227414  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:00.227427  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:00.330232  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:00.330255  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:00.363949  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:00.363967  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:00.445659  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:00.436629   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.437562   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.439318   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.439706   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.441418   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:00.436629   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.437562   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.439318   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.439706   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:00.441418   14686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:00.445669  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:00.445681  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:00.509415  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:00.509440  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:03.043380  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:03.053715  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:03.053796  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:03.079434  836363 cri.go:89] found id: ""
	I1210 06:41:03.079449  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.079456  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:03.079462  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:03.079520  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:03.112748  836363 cri.go:89] found id: ""
	I1210 06:41:03.112761  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.112768  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:03.112773  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:03.112831  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:03.137303  836363 cri.go:89] found id: ""
	I1210 06:41:03.137317  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.137324  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:03.137329  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:03.137390  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:03.162303  836363 cri.go:89] found id: ""
	I1210 06:41:03.162317  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.162324  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:03.162329  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:03.162387  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:03.186423  836363 cri.go:89] found id: ""
	I1210 06:41:03.186438  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.186445  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:03.186449  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:03.186542  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:03.215070  836363 cri.go:89] found id: ""
	I1210 06:41:03.215084  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.215091  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:03.215096  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:03.215154  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:03.238820  836363 cri.go:89] found id: ""
	I1210 06:41:03.238834  836363 logs.go:282] 0 containers: []
	W1210 06:41:03.238841  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:03.238850  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:03.238861  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:03.293835  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:03.293853  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:03.312548  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:03.312565  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:03.381504  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:03.373169   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.373896   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.375591   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.376023   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.377455   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:03.373169   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.373896   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.375591   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.376023   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:03.377455   14792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:03.381514  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:03.381524  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:03.444806  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:03.444826  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:05.972428  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:05.982168  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:05.982226  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:06.011191  836363 cri.go:89] found id: ""
	I1210 06:41:06.011206  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.011214  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:06.011220  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:06.011295  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:06.038921  836363 cri.go:89] found id: ""
	I1210 06:41:06.038937  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.038944  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:06.038949  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:06.039011  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:06.063412  836363 cri.go:89] found id: ""
	I1210 06:41:06.063426  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.063433  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:06.063438  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:06.063497  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:06.087777  836363 cri.go:89] found id: ""
	I1210 06:41:06.087800  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.087807  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:06.087812  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:06.087881  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:06.112794  836363 cri.go:89] found id: ""
	I1210 06:41:06.112809  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.112815  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:06.112821  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:06.112877  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:06.137620  836363 cri.go:89] found id: ""
	I1210 06:41:06.137634  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.137641  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:06.137645  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:06.137702  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:06.164245  836363 cri.go:89] found id: ""
	I1210 06:41:06.164259  836363 logs.go:282] 0 containers: []
	W1210 06:41:06.164266  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:06.164274  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:06.164331  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:06.219975  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:06.219994  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:06.236571  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:06.236596  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:06.309920  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:06.294848   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.295676   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.297523   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.298004   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.299656   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:06.294848   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.295676   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.297523   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.298004   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:06.299656   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:06.309934  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:06.309944  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:06.383624  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:06.383646  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:08.911581  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:08.923631  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:08.923713  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:08.950073  836363 cri.go:89] found id: ""
	I1210 06:41:08.950087  836363 logs.go:282] 0 containers: []
	W1210 06:41:08.950094  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:08.950100  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:08.950157  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:08.976323  836363 cri.go:89] found id: ""
	I1210 06:41:08.976337  836363 logs.go:282] 0 containers: []
	W1210 06:41:08.976345  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:08.976349  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:08.976409  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:09.001975  836363 cri.go:89] found id: ""
	I1210 06:41:09.001991  836363 logs.go:282] 0 containers: []
	W1210 06:41:09.001998  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:09.002004  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:09.002076  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:09.027223  836363 cri.go:89] found id: ""
	I1210 06:41:09.027237  836363 logs.go:282] 0 containers: []
	W1210 06:41:09.027250  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:09.027256  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:09.027314  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:09.051870  836363 cri.go:89] found id: ""
	I1210 06:41:09.051884  836363 logs.go:282] 0 containers: []
	W1210 06:41:09.051890  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:09.051896  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:09.051955  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:09.075643  836363 cri.go:89] found id: ""
	I1210 06:41:09.075658  836363 logs.go:282] 0 containers: []
	W1210 06:41:09.075678  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:09.075684  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:09.075740  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:09.100390  836363 cri.go:89] found id: ""
	I1210 06:41:09.100404  836363 logs.go:282] 0 containers: []
	W1210 06:41:09.100411  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:09.100419  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:09.100430  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:09.164481  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:09.156151   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.156953   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.158536   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.159001   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.160652   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:09.156151   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.156953   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.158536   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.159001   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:09.160652   14991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:09.164492  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:09.164502  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:09.228784  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:09.228804  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:09.256846  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:09.256863  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:09.312682  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:09.312702  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:11.842135  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:11.852673  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:11.852735  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:11.877129  836363 cri.go:89] found id: ""
	I1210 06:41:11.877144  836363 logs.go:282] 0 containers: []
	W1210 06:41:11.877151  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:11.877156  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:11.877215  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:11.902763  836363 cri.go:89] found id: ""
	I1210 06:41:11.902777  836363 logs.go:282] 0 containers: []
	W1210 06:41:11.902784  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:11.902789  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:11.902863  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:11.927125  836363 cri.go:89] found id: ""
	I1210 06:41:11.927139  836363 logs.go:282] 0 containers: []
	W1210 06:41:11.927146  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:11.927150  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:11.927206  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:11.966123  836363 cri.go:89] found id: ""
	I1210 06:41:11.966137  836363 logs.go:282] 0 containers: []
	W1210 06:41:11.966144  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:11.966149  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:11.966205  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:11.990046  836363 cri.go:89] found id: ""
	I1210 06:41:11.990059  836363 logs.go:282] 0 containers: []
	W1210 06:41:11.990067  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:11.990072  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:11.990132  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:12.015096  836363 cri.go:89] found id: ""
	I1210 06:41:12.015111  836363 logs.go:282] 0 containers: []
	W1210 06:41:12.015118  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:12.015124  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:12.015185  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:12.040883  836363 cri.go:89] found id: ""
	I1210 06:41:12.040897  836363 logs.go:282] 0 containers: []
	W1210 06:41:12.040905  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:12.040912  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:12.040923  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:12.067975  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:12.067991  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:12.124161  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:12.124181  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:12.141074  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:12.141090  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:12.204309  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:12.196503   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.197043   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.198523   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.199003   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.200458   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:12.196503   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.197043   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.198523   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.199003   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:12.200458   15112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:12.204325  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:12.204336  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:14.770164  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:14.781008  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:14.781070  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:14.810029  836363 cri.go:89] found id: ""
	I1210 06:41:14.810042  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.810051  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:14.810056  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:14.810115  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:14.834988  836363 cri.go:89] found id: ""
	I1210 06:41:14.835002  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.835009  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:14.835015  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:14.835076  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:14.859273  836363 cri.go:89] found id: ""
	I1210 06:41:14.859287  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.859294  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:14.859299  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:14.859358  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:14.884024  836363 cri.go:89] found id: ""
	I1210 06:41:14.884038  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.884045  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:14.884051  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:14.884111  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:14.907573  836363 cri.go:89] found id: ""
	I1210 06:41:14.907587  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.907596  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:14.907601  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:14.907660  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:14.932198  836363 cri.go:89] found id: ""
	I1210 06:41:14.932212  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.932219  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:14.932225  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:14.932285  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:14.957047  836363 cri.go:89] found id: ""
	I1210 06:41:14.957062  836363 logs.go:282] 0 containers: []
	W1210 06:41:14.957069  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:14.957077  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:14.957087  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:15.015819  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:15.015841  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:15.035356  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:15.035387  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:15.111422  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:15.102537   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.103449   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.105131   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.105642   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.107373   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:15.102537   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.103449   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.105131   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.105642   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:15.107373   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:15.111434  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:15.111446  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:15.173911  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:15.173930  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:17.707403  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:17.717581  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:17.717645  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:17.741545  836363 cri.go:89] found id: ""
	I1210 06:41:17.741559  836363 logs.go:282] 0 containers: []
	W1210 06:41:17.741566  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:17.741572  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:17.741630  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:17.766133  836363 cri.go:89] found id: ""
	I1210 06:41:17.766147  836363 logs.go:282] 0 containers: []
	W1210 06:41:17.766154  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:17.766159  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:17.766213  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:17.790714  836363 cri.go:89] found id: ""
	I1210 06:41:17.790728  836363 logs.go:282] 0 containers: []
	W1210 06:41:17.790735  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:17.790740  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:17.790795  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:17.814639  836363 cri.go:89] found id: ""
	I1210 06:41:17.814653  836363 logs.go:282] 0 containers: []
	W1210 06:41:17.814660  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:17.814666  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:17.814721  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:17.839269  836363 cri.go:89] found id: ""
	I1210 06:41:17.839283  836363 logs.go:282] 0 containers: []
	W1210 06:41:17.839290  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:17.839295  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:17.839353  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:17.864188  836363 cri.go:89] found id: ""
	I1210 06:41:17.864202  836363 logs.go:282] 0 containers: []
	W1210 06:41:17.864209  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:17.864214  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:17.864273  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:17.889103  836363 cri.go:89] found id: ""
	I1210 06:41:17.889117  836363 logs.go:282] 0 containers: []
	W1210 06:41:17.889124  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:17.889132  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:17.889142  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:17.945534  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:17.945553  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:17.962119  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:17.962136  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:18.031737  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:18.022190   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:18.023153   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:18.024970   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:18.025609   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:18.027479   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:18.022190   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:18.023153   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:18.024970   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:18.025609   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:18.027479   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:18.031747  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:18.031758  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:18.095025  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:18.095045  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
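	Every probe so far has found zero containers for every control-plane name, so the only remaining diagnostic signal is in the journals, which the collector trims to the last 400 lines. A sketch for reading the full unit logs instead, using standard journalctl flags (the --no-pager form does not appear in the log above):

	    # Full kubelet and containerd journals rather than the 400-line tails requested above
	    sudo journalctl -u kubelet --no-pager
	    sudo journalctl -u containerd --no-pager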
	I1210 06:41:20.626616  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:20.637064  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:20.637135  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:20.661085  836363 cri.go:89] found id: ""
	I1210 06:41:20.661098  836363 logs.go:282] 0 containers: []
	W1210 06:41:20.661105  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:20.661110  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:20.661170  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:20.686407  836363 cri.go:89] found id: ""
	I1210 06:41:20.686420  836363 logs.go:282] 0 containers: []
	W1210 06:41:20.686427  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:20.686432  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:20.686519  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:20.710905  836363 cri.go:89] found id: ""
	I1210 06:41:20.710919  836363 logs.go:282] 0 containers: []
	W1210 06:41:20.710926  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:20.710931  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:20.710989  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:20.735241  836363 cri.go:89] found id: ""
	I1210 06:41:20.735255  836363 logs.go:282] 0 containers: []
	W1210 06:41:20.735262  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:20.735268  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:20.735326  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:20.762996  836363 cri.go:89] found id: ""
	I1210 06:41:20.763010  836363 logs.go:282] 0 containers: []
	W1210 06:41:20.763017  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:20.763022  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:20.763080  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:20.793084  836363 cri.go:89] found id: ""
	I1210 06:41:20.793098  836363 logs.go:282] 0 containers: []
	W1210 06:41:20.793105  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:20.793111  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:20.793167  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:20.821259  836363 cri.go:89] found id: ""
	I1210 06:41:20.821274  836363 logs.go:282] 0 containers: []
	W1210 06:41:20.821281  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:20.821289  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:20.821300  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:20.876655  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:20.876676  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:20.894043  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:20.894060  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:20.967195  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:20.958394   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:20.959075   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:20.960382   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:20.961013   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:20.962652   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:20.958394   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:20.959075   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:20.960382   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:20.961013   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:20.962652   15411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:20.967206  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:20.967217  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:21.028930  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:21.028949  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:23.559672  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:23.572318  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:23.572395  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:23.603800  836363 cri.go:89] found id: ""
	I1210 06:41:23.603814  836363 logs.go:282] 0 containers: []
	W1210 06:41:23.603821  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:23.603827  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:23.603900  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:23.634190  836363 cri.go:89] found id: ""
	I1210 06:41:23.634205  836363 logs.go:282] 0 containers: []
	W1210 06:41:23.634212  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:23.634217  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:23.634277  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:23.664876  836363 cri.go:89] found id: ""
	I1210 06:41:23.664890  836363 logs.go:282] 0 containers: []
	W1210 06:41:23.664898  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:23.664904  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:23.664974  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:23.693167  836363 cri.go:89] found id: ""
	I1210 06:41:23.693182  836363 logs.go:282] 0 containers: []
	W1210 06:41:23.693189  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:23.693196  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:23.693264  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:23.719371  836363 cri.go:89] found id: ""
	I1210 06:41:23.719385  836363 logs.go:282] 0 containers: []
	W1210 06:41:23.719393  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:23.719398  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:23.719460  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:23.745307  836363 cri.go:89] found id: ""
	I1210 06:41:23.745321  836363 logs.go:282] 0 containers: []
	W1210 06:41:23.745328  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:23.745334  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:23.745399  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:23.773016  836363 cri.go:89] found id: ""
	I1210 06:41:23.773031  836363 logs.go:282] 0 containers: []
	W1210 06:41:23.773038  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:23.773046  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:23.773056  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:23.829249  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:23.829268  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:23.846743  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:23.846761  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:23.915363  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:23.907095   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:23.907839   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:23.909482   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:23.910101   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:23.911295   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:23.907095   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:23.907839   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:23.909482   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:23.910101   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:23.911295   15514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:23.915374  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:23.915385  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:23.977818  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:23.977838  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:26.512080  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:26.522967  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:26.523031  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:26.556941  836363 cri.go:89] found id: ""
	I1210 06:41:26.556955  836363 logs.go:282] 0 containers: []
	W1210 06:41:26.556962  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:26.556967  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:26.557028  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:26.583709  836363 cri.go:89] found id: ""
	I1210 06:41:26.583723  836363 logs.go:282] 0 containers: []
	W1210 06:41:26.583731  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:26.583737  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:26.583794  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:26.620398  836363 cri.go:89] found id: ""
	I1210 06:41:26.620411  836363 logs.go:282] 0 containers: []
	W1210 06:41:26.620418  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:26.620424  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:26.620488  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:26.645205  836363 cri.go:89] found id: ""
	I1210 06:41:26.645220  836363 logs.go:282] 0 containers: []
	W1210 06:41:26.645227  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:26.645232  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:26.645295  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:26.672971  836363 cri.go:89] found id: ""
	I1210 06:41:26.672985  836363 logs.go:282] 0 containers: []
	W1210 06:41:26.672992  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:26.672996  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:26.673054  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:26.701966  836363 cri.go:89] found id: ""
	I1210 06:41:26.701980  836363 logs.go:282] 0 containers: []
	W1210 06:41:26.701987  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:26.701993  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:26.702051  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:26.726241  836363 cri.go:89] found id: ""
	I1210 06:41:26.726254  836363 logs.go:282] 0 containers: []
	W1210 06:41:26.726261  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:26.726269  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:26.726280  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:26.782519  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:26.782539  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:26.799105  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:26.799127  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:26.869131  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:26.860787   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:26.861476   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:26.863184   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:26.863795   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:26.865363   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:26.860787   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:26.861476   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:26.863184   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:26.863795   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:26.865363   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:26.869141  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:26.869152  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:26.935169  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:26.935188  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:29.463208  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:29.473355  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:29.473417  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:29.497493  836363 cri.go:89] found id: ""
	I1210 06:41:29.497512  836363 logs.go:282] 0 containers: []
	W1210 06:41:29.497519  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:29.497524  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:29.497584  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:29.525346  836363 cri.go:89] found id: ""
	I1210 06:41:29.525360  836363 logs.go:282] 0 containers: []
	W1210 06:41:29.525366  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:29.525381  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:29.525485  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:29.553583  836363 cri.go:89] found id: ""
	I1210 06:41:29.553596  836363 logs.go:282] 0 containers: []
	W1210 06:41:29.553604  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:29.553609  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:29.553665  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:29.587462  836363 cri.go:89] found id: ""
	I1210 06:41:29.587476  836363 logs.go:282] 0 containers: []
	W1210 06:41:29.587483  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:29.587488  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:29.587559  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:29.625152  836363 cri.go:89] found id: ""
	I1210 06:41:29.625166  836363 logs.go:282] 0 containers: []
	W1210 06:41:29.625173  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:29.625178  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:29.625235  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:29.649760  836363 cri.go:89] found id: ""
	I1210 06:41:29.649773  836363 logs.go:282] 0 containers: []
	W1210 06:41:29.649781  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:29.649786  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:29.649843  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:29.674875  836363 cri.go:89] found id: ""
	I1210 06:41:29.674889  836363 logs.go:282] 0 containers: []
	W1210 06:41:29.674897  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:29.674904  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:29.674916  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:29.691346  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:29.691363  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:29.753565  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:29.745153   15719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:29.745754   15719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:29.747557   15719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:29.748093   15719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:29.749766   15719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:29.745153   15719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:29.745754   15719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:29.747557   15719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:29.748093   15719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:29.749766   15719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:29.753580  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:29.753591  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:29.815732  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:29.815751  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:29.848125  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:29.848141  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
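	The gather-and-probe cycle repeats on a roughly three-second cadence (06:41:08 through 06:41:32 above) with identical results each pass: no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers, and a refused connection on localhost:8441. A hypothetical way to wait for the port to come back instead of re-reading each pass, assuming curl is present in the node image (it is not used anywhere in this log):

	    # Illustrative only: poll the apiserver's unauthenticated /healthz endpoint until it answers.
	    # -k skips TLS verification, -s is silent, -f makes curl fail on HTTP errors.
	    until curl -ksf https://localhost:8441/healthz >/dev/null; do sleep 3; done
	    echo apiserver up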
	I1210 06:41:32.408296  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:32.419204  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:32.419279  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:32.445527  836363 cri.go:89] found id: ""
	I1210 06:41:32.445542  836363 logs.go:282] 0 containers: []
	W1210 06:41:32.445548  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:32.445553  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:32.445611  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:32.470075  836363 cri.go:89] found id: ""
	I1210 06:41:32.470088  836363 logs.go:282] 0 containers: []
	W1210 06:41:32.470095  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:32.470108  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:32.470164  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:32.494632  836363 cri.go:89] found id: ""
	I1210 06:41:32.494647  836363 logs.go:282] 0 containers: []
	W1210 06:41:32.494654  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:32.494658  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:32.494732  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:32.522542  836363 cri.go:89] found id: ""
	I1210 06:41:32.522555  836363 logs.go:282] 0 containers: []
	W1210 06:41:32.522568  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:32.522574  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:32.522641  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:32.557483  836363 cri.go:89] found id: ""
	I1210 06:41:32.557498  836363 logs.go:282] 0 containers: []
	W1210 06:41:32.557505  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:32.557511  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:32.557570  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:32.586583  836363 cri.go:89] found id: ""
	I1210 06:41:32.586598  836363 logs.go:282] 0 containers: []
	W1210 06:41:32.586605  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:32.586611  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:32.586673  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:32.614984  836363 cri.go:89] found id: ""
	I1210 06:41:32.614997  836363 logs.go:282] 0 containers: []
	W1210 06:41:32.615004  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:32.615012  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:32.615023  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:32.677103  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:32.669262   15821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:32.669805   15821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:32.671272   15821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:32.671743   15821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:32.673216   15821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:32.669262   15821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:32.669805   15821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:32.671272   15821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:32.671743   15821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:32.673216   15821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:32.677113  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:32.677123  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:32.738003  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:32.738022  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:32.765472  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:32.765488  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:32.822384  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:32.822406  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
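The repeating pattern above is minikube waiting for an apiserver that never comes up: each pass runs pgrep for a kube-apiserver process, asks crictl for every expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), finds none, and gathers kubelet/dmesg/containerd logs before retrying. A minimal sketch for replaying the same probes by hand, assuming the docker driver, where the node is a container named after the profile (functional-534748 in this log):

	# same commands the log runs via ssh_runner, replayed through docker exec
	docker exec functional-534748 sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	docker exec functional-534748 sudo crictl ps -a --quiet --name=kube-apiserver
	docker exec functional-534748 sudo journalctl -u kubelet -n 400 --no-pager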
	I1210 06:41:35.339259  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:35.349700  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:35.349758  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:35.375337  836363 cri.go:89] found id: ""
	I1210 06:41:35.375359  836363 logs.go:282] 0 containers: []
	W1210 06:41:35.375366  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:35.375371  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:35.375449  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:35.399613  836363 cri.go:89] found id: ""
	I1210 06:41:35.399627  836363 logs.go:282] 0 containers: []
	W1210 06:41:35.399634  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:35.399639  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:35.399696  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:35.423561  836363 cri.go:89] found id: ""
	I1210 06:41:35.423575  836363 logs.go:282] 0 containers: []
	W1210 06:41:35.423582  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:35.423588  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:35.423650  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:35.448165  836363 cri.go:89] found id: ""
	I1210 06:41:35.448179  836363 logs.go:282] 0 containers: []
	W1210 06:41:35.448186  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:35.448198  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:35.448256  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:35.476970  836363 cri.go:89] found id: ""
	I1210 06:41:35.476984  836363 logs.go:282] 0 containers: []
	W1210 06:41:35.476992  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:35.476997  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:35.477062  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:35.500993  836363 cri.go:89] found id: ""
	I1210 06:41:35.501007  836363 logs.go:282] 0 containers: []
	W1210 06:41:35.501024  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:35.501029  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:35.501087  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:35.530273  836363 cri.go:89] found id: ""
	I1210 06:41:35.530294  836363 logs.go:282] 0 containers: []
	W1210 06:41:35.530301  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:35.530309  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:35.530320  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:35.588229  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:35.588248  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:35.608295  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:35.608311  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:35.673227  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:35.664447   15930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:35.665198   15930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:35.667057   15930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:35.667693   15930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:35.669348   15930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:35.664447   15930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:35.665198   15930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:35.667057   15930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:35.667693   15930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:35.669348   15930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:35.673237  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:35.673248  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:35.735230  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:35.735250  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:38.262657  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:38.273339  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:38.273403  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:38.298561  836363 cri.go:89] found id: ""
	I1210 06:41:38.298576  836363 logs.go:282] 0 containers: []
	W1210 06:41:38.298583  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:38.298588  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:38.298647  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:38.323273  836363 cri.go:89] found id: ""
	I1210 06:41:38.323294  836363 logs.go:282] 0 containers: []
	W1210 06:41:38.323301  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:38.323306  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:38.323369  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:38.348694  836363 cri.go:89] found id: ""
	I1210 06:41:38.348709  836363 logs.go:282] 0 containers: []
	W1210 06:41:38.348716  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:38.348721  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:38.348777  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:38.374030  836363 cri.go:89] found id: ""
	I1210 06:41:38.374044  836363 logs.go:282] 0 containers: []
	W1210 06:41:38.374052  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:38.374057  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:38.374116  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:38.399116  836363 cri.go:89] found id: ""
	I1210 06:41:38.399130  836363 logs.go:282] 0 containers: []
	W1210 06:41:38.399137  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:38.399142  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:38.399205  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:38.431922  836363 cri.go:89] found id: ""
	I1210 06:41:38.431936  836363 logs.go:282] 0 containers: []
	W1210 06:41:38.431943  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:38.431954  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:38.432015  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:38.456101  836363 cri.go:89] found id: ""
	I1210 06:41:38.456115  836363 logs.go:282] 0 containers: []
	W1210 06:41:38.456122  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:38.456130  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:38.456140  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:38.511923  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:38.511943  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:38.528342  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:38.528360  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:38.608737  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:38.599653   16028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:38.600438   16028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:38.601979   16028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:38.602518   16028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:38.604301   16028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:38.599653   16028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:38.600438   16028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:38.601979   16028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:38.602518   16028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:38.604301   16028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:38.608759  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:38.608770  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:38.671052  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:38.671073  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
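Every "describe nodes" attempt above fails the same way: client-go's discovery layer (memcache.go) retries the API group-list request five times against https://localhost:8441, and each dial is refused because nothing is listening on the apiserver port inside the node. Two quick manual checks, a sketch under the same docker-driver assumption as above (and assuming ss is present in the node image):

	# is anything bound to the apiserver port (8441, per the refused URLs above)?
	docker exec functional-534748 sudo ss -ltnp 'sport = :8441'
	# repeat the exact probe from the log, using the in-node kubeconfig
	docker exec functional-534748 sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig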
	I1210 06:41:41.199012  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:41.208683  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:41.208748  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:41.232632  836363 cri.go:89] found id: ""
	I1210 06:41:41.232645  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.232652  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:41.232657  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:41.232718  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:41.255309  836363 cri.go:89] found id: ""
	I1210 06:41:41.255322  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.255329  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:41.255334  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:41.255388  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:41.279539  836363 cri.go:89] found id: ""
	I1210 06:41:41.279553  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.279560  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:41.279565  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:41.279636  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:41.306855  836363 cri.go:89] found id: ""
	I1210 06:41:41.306870  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.306877  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:41.306882  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:41.306943  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:41.331914  836363 cri.go:89] found id: ""
	I1210 06:41:41.331927  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.331933  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:41.331938  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:41.331998  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:41.355926  836363 cri.go:89] found id: ""
	I1210 06:41:41.355940  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.355947  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:41.355952  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:41.356022  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:41.380191  836363 cri.go:89] found id: ""
	I1210 06:41:41.380205  836363 logs.go:282] 0 containers: []
	W1210 06:41:41.380213  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:41.380221  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:41.380237  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:41.396613  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:41.396631  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:41.460969  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:41.452836   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.453418   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.455027   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.455521   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.457097   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:41.452836   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.453418   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.455027   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.455521   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:41.457097   16134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:41.460979  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:41.460991  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:41.522046  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:41.522066  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:41.556015  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:41.556032  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:44.133635  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:44.143661  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:44.143725  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:44.170247  836363 cri.go:89] found id: ""
	I1210 06:41:44.170262  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.170269  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:44.170274  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:44.170341  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:44.195020  836363 cri.go:89] found id: ""
	I1210 06:41:44.195034  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.195040  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:44.195045  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:44.195101  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:44.219352  836363 cri.go:89] found id: ""
	I1210 06:41:44.219366  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.219373  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:44.219378  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:44.219435  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:44.247508  836363 cri.go:89] found id: ""
	I1210 06:41:44.247522  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.247529  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:44.247534  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:44.247593  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:44.271983  836363 cri.go:89] found id: ""
	I1210 06:41:44.271997  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.272004  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:44.272009  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:44.272066  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:44.295908  836363 cri.go:89] found id: ""
	I1210 06:41:44.295922  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.295928  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:44.295934  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:44.295993  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:44.324246  836363 cri.go:89] found id: ""
	I1210 06:41:44.324260  836363 logs.go:282] 0 containers: []
	W1210 06:41:44.324266  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:44.324275  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:44.324285  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:44.387028  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:44.387048  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:44.415316  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:44.415332  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:44.471125  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:44.471146  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:44.487999  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:44.488017  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:44.555772  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:44.542191   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.542827   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.545275   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.545612   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.547112   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:44.542191   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.542827   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.545275   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.545612   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:44.547112   16253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:47.056814  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:47.066882  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:47.066943  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:47.091827  836363 cri.go:89] found id: ""
	I1210 06:41:47.091841  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.091848  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:47.091853  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:47.091910  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:47.115556  836363 cri.go:89] found id: ""
	I1210 06:41:47.115571  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.115578  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:47.115583  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:47.115640  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:47.140381  836363 cri.go:89] found id: ""
	I1210 06:41:47.140395  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.140402  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:47.140407  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:47.140466  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:47.164584  836363 cri.go:89] found id: ""
	I1210 06:41:47.164599  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.164606  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:47.164611  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:47.164669  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:47.188952  836363 cri.go:89] found id: ""
	I1210 06:41:47.188966  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.188973  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:47.188978  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:47.189036  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:47.215501  836363 cri.go:89] found id: ""
	I1210 06:41:47.215515  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.215522  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:47.215528  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:47.215594  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:47.248270  836363 cri.go:89] found id: ""
	I1210 06:41:47.248284  836363 logs.go:282] 0 containers: []
	W1210 06:41:47.248291  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:47.248301  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:47.248312  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:47.264763  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:47.264780  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:47.328736  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:47.319556   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.320385   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.321992   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.322636   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.324430   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:47.319556   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.320385   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.321992   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.322636   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:47.324430   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:47.328762  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:47.328773  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:47.391108  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:47.391129  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:47.421573  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:47.421590  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:49.978044  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:49.988396  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:49.988461  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:50.019406  836363 cri.go:89] found id: ""
	I1210 06:41:50.019422  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.019430  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:50.019436  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:50.019525  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:50.046394  836363 cri.go:89] found id: ""
	I1210 06:41:50.046409  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.046416  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:50.046421  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:50.046513  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:50.073199  836363 cri.go:89] found id: ""
	I1210 06:41:50.073213  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.073220  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:50.073225  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:50.073287  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:50.099702  836363 cri.go:89] found id: ""
	I1210 06:41:50.099716  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.099722  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:50.099728  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:50.099787  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:50.128872  836363 cri.go:89] found id: ""
	I1210 06:41:50.128886  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.128893  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:50.128898  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:50.128956  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:50.153319  836363 cri.go:89] found id: ""
	I1210 06:41:50.153333  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.153340  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:50.153346  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:50.153404  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:50.180949  836363 cri.go:89] found id: ""
	I1210 06:41:50.180962  836363 logs.go:282] 0 containers: []
	W1210 06:41:50.180968  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:50.180976  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:50.180986  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:50.242900  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:50.242922  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:50.273618  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:50.273634  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:50.328466  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:50.328485  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:50.344888  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:50.344905  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:50.410799  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:50.402194   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.403126   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.404850   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.405142   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.406780   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:50.402194   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.403126   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.404850   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.405142   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:50.406780   16463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
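When every crictl query comes back empty like this, the reason the kube-apiserver container never started is often visible in the containerd journal that each pass collects. A hypothetical one-liner for scanning it by hand (the grep pattern is an illustration, not part of the test tooling; the pipe runs on the host):

	docker exec functional-534748 sudo journalctl -u containerd -n 400 --no-pager | grep -i apiserver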
	I1210 06:41:52.911683  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:52.922118  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:52.922186  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:52.947907  836363 cri.go:89] found id: ""
	I1210 06:41:52.947922  836363 logs.go:282] 0 containers: []
	W1210 06:41:52.947930  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:52.947935  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:52.948002  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:52.974796  836363 cri.go:89] found id: ""
	I1210 06:41:52.974812  836363 logs.go:282] 0 containers: []
	W1210 06:41:52.974820  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:52.974826  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:52.974885  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:53.005919  836363 cri.go:89] found id: ""
	I1210 06:41:53.005935  836363 logs.go:282] 0 containers: []
	W1210 06:41:53.005942  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:53.005950  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:53.006027  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:53.033320  836363 cri.go:89] found id: ""
	I1210 06:41:53.033333  836363 logs.go:282] 0 containers: []
	W1210 06:41:53.033340  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:53.033345  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:53.033405  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:53.061819  836363 cri.go:89] found id: ""
	I1210 06:41:53.061834  836363 logs.go:282] 0 containers: []
	W1210 06:41:53.061851  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:53.061857  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:53.061924  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:53.086290  836363 cri.go:89] found id: ""
	I1210 06:41:53.086304  836363 logs.go:282] 0 containers: []
	W1210 06:41:53.086311  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:53.086316  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:53.086374  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:53.111667  836363 cri.go:89] found id: ""
	I1210 06:41:53.111681  836363 logs.go:282] 0 containers: []
	W1210 06:41:53.111697  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:53.111706  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:53.111716  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:53.168392  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:53.168412  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:53.185807  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:53.185823  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:53.254387  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:53.246258   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.246805   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.248540   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.248996   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.250535   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:53.246258   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.246805   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.248540   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.248996   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:53.250535   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:53.254397  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:53.254408  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:53.319043  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:53.319063  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:55.851295  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:55.861334  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:55.861402  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:55.886929  836363 cri.go:89] found id: ""
	I1210 06:41:55.886949  836363 logs.go:282] 0 containers: []
	W1210 06:41:55.886957  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:55.886962  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:55.887020  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:55.915116  836363 cri.go:89] found id: ""
	I1210 06:41:55.915130  836363 logs.go:282] 0 containers: []
	W1210 06:41:55.915138  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:55.915142  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:55.915200  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:55.939013  836363 cri.go:89] found id: ""
	I1210 06:41:55.939033  836363 logs.go:282] 0 containers: []
	W1210 06:41:55.939040  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:55.939045  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:55.939101  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:55.964369  836363 cri.go:89] found id: ""
	I1210 06:41:55.964383  836363 logs.go:282] 0 containers: []
	W1210 06:41:55.964390  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:55.964395  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:55.964455  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:55.989465  836363 cri.go:89] found id: ""
	I1210 06:41:55.989478  836363 logs.go:282] 0 containers: []
	W1210 06:41:55.989485  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:55.989491  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:55.989557  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:56.014203  836363 cri.go:89] found id: ""
	I1210 06:41:56.014218  836363 logs.go:282] 0 containers: []
	W1210 06:41:56.014225  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:56.014230  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:56.014336  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:56.043892  836363 cri.go:89] found id: ""
	I1210 06:41:56.043906  836363 logs.go:282] 0 containers: []
	W1210 06:41:56.043916  836363 logs.go:284] No container was found matching "kindnet"
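Each retry scans the same seven control-plane and CNI components one by one. The per-component queries above condense to a single loop, shown here as a hypothetical consolidation over the same containerd root:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
        sudo crictl ps -a --quiet --name="$c"   # empty output = no container found
    done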
	I1210 06:41:56.043925  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:56.043936  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:56.112761  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:56.104681   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.105226   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.106816   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.107362   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.108899   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:56.104681   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.105226   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.106816   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.107362   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:56.108899   16656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:56.112770  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:56.112781  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:56.174642  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:56.174662  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:56.202947  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:56.202963  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:56.259062  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:56.259082  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
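The dmesg invocation above keeps only actionable kernel messages. Flag by flag (util-linux dmesg, as invoked in the log):

    sudo dmesg -P -H -L=never \                # no pager, human timestamps, no color
        --level warn,err,crit,alert,emerg \    # warning severity and above only
        | tail -n 400                          # cap the capture at 400 lines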
	I1210 06:41:58.776033  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:58.786675  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:58.786737  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:58.822543  836363 cri.go:89] found id: ""
	I1210 06:41:58.822557  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.822563  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:41:58.822572  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:58.822634  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:58.848835  836363 cri.go:89] found id: ""
	I1210 06:41:58.848850  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.848857  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:41:58.848862  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:58.848919  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:58.876530  836363 cri.go:89] found id: ""
	I1210 06:41:58.876544  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.876551  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:41:58.876556  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:58.876615  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:58.901700  836363 cri.go:89] found id: ""
	I1210 06:41:58.901714  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.901728  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:41:58.901733  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:58.901791  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:58.928495  836363 cri.go:89] found id: ""
	I1210 06:41:58.928509  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.928515  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:58.928520  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:58.928577  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:58.952415  836363 cri.go:89] found id: ""
	I1210 06:41:58.952428  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.952435  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:41:58.952440  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:58.952496  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:58.981756  836363 cri.go:89] found id: ""
	I1210 06:41:58.981771  836363 logs.go:282] 0 containers: []
	W1210 06:41:58.981788  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:58.981797  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:58.981809  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:59.049361  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:41:59.041702   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.042693   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.043691   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.044312   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.045540   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:41:59.041702   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.042693   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.043691   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.044312   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:41:59.045540   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:59.049372  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:59.049382  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:59.111079  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:41:59.111098  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:59.141459  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:59.141474  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:59.199670  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:59.199691  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
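The journalctl calls take the same shape for each service: `-u` selects the systemd unit and `-n 400` limits output to the newest 400 lines. A sketch combining the two gathers seen above:

    for unit in kubelet containerd; do
        sudo journalctl -u "$unit" -n 400   # newest 400 lines for that unit only
    done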
	I1210 06:42:01.716854  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:01.728404  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:01.728475  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:01.756029  836363 cri.go:89] found id: ""
	I1210 06:42:01.756042  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.756049  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:42:01.756054  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:01.756109  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:01.780969  836363 cri.go:89] found id: ""
	I1210 06:42:01.780983  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.780990  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:42:01.780995  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:01.781055  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:01.820198  836363 cri.go:89] found id: ""
	I1210 06:42:01.820212  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.820219  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:42:01.820224  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:01.820284  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:01.848531  836363 cri.go:89] found id: ""
	I1210 06:42:01.848546  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.848553  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:42:01.848558  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:01.848617  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:01.878420  836363 cri.go:89] found id: ""
	I1210 06:42:01.878433  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.878441  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:01.878448  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:01.878534  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:01.905311  836363 cri.go:89] found id: ""
	I1210 06:42:01.905325  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.905344  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:42:01.905350  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:01.905421  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:01.929912  836363 cri.go:89] found id: ""
	I1210 06:42:01.929926  836363 logs.go:282] 0 containers: []
	W1210 06:42:01.929944  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:01.929953  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:01.929963  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:01.985928  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:01.985948  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:02.003638  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:02.003657  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:02.075789  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:42:02.068334   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.068871   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.070020   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.070533   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.071984   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:42:02.068334   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.068871   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.070020   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.070533   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:02.071984   16868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:42:02.075800  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:02.075810  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:02.136779  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:42:02.136798  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:04.664122  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:04.675095  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:04.675159  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:04.699777  836363 cri.go:89] found id: ""
	I1210 06:42:04.699800  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.699808  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:42:04.699814  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:04.699911  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:04.724439  836363 cri.go:89] found id: ""
	I1210 06:42:04.724461  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.724468  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:42:04.724473  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:04.724538  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:04.750165  836363 cri.go:89] found id: ""
	I1210 06:42:04.750179  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.750187  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:42:04.750192  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:04.750260  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:04.775655  836363 cri.go:89] found id: ""
	I1210 06:42:04.775669  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.775676  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:42:04.775681  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:04.775740  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:04.805746  836363 cri.go:89] found id: ""
	I1210 06:42:04.805759  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.805776  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:04.805782  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:04.805849  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:04.836239  836363 cri.go:89] found id: ""
	I1210 06:42:04.836261  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.836269  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:42:04.836275  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:04.836344  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:04.862854  836363 cri.go:89] found id: ""
	I1210 06:42:04.862868  836363 logs.go:282] 0 containers: []
	W1210 06:42:04.862875  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:04.862883  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:04.862893  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:04.922415  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:04.922435  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:04.939187  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:04.939203  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:05.006750  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:42:04.996413   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.996887   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.998439   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.998946   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:05.000828   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:42:04.996413   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.996887   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.998439   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:04.998946   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:05.000828   16974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:42:05.006762  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:05.006773  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:05.070511  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:42:05.070533  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:07.606355  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:07.617096  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:07.617156  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:07.642031  836363 cri.go:89] found id: ""
	I1210 06:42:07.642047  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.642054  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:42:07.642060  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:07.642117  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:07.670075  836363 cri.go:89] found id: ""
	I1210 06:42:07.670089  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.670107  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:42:07.670114  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:07.670174  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:07.695503  836363 cri.go:89] found id: ""
	I1210 06:42:07.695517  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.695534  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:42:07.695539  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:07.695613  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:07.719792  836363 cri.go:89] found id: ""
	I1210 06:42:07.719805  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.719813  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:42:07.719818  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:07.719875  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:07.742885  836363 cri.go:89] found id: ""
	I1210 06:42:07.742899  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.742906  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:07.742911  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:07.742972  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:07.766658  836363 cri.go:89] found id: ""
	I1210 06:42:07.766672  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.766679  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:42:07.766684  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:07.766742  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:07.790890  836363 cri.go:89] found id: ""
	I1210 06:42:07.790917  836363 logs.go:282] 0 containers: []
	W1210 06:42:07.790924  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:07.790932  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:42:07.790943  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:07.832030  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:07.832053  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:07.897794  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:07.897815  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:07.914747  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:07.914765  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:07.985400  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:42:07.977663   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.978174   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.979730   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.980136   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.981623   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:42:07.977663   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.978174   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.979730   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.980136   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:42:07.981623   17088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:42:07.985411  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:07.985422  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:10.549627  836363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
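This pgrep probe is the loop's exit condition: it succeeds only once a kube-apiserver process whose command line mentions minikube exists, and here it never does. Flag for flag:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # -f  match against the full command line, not just the process name
    # -x  require the pattern to match that command line in its entirety
    # -n  if several processes match, report only the newest one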
	I1210 06:42:10.559818  836363 kubeadm.go:602] duration metric: took 4m3.540459063s to restartPrimaryControlPlane
	W1210 06:42:10.559885  836363 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 06:42:10.559961  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 06:42:10.971123  836363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:42:10.985022  836363 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:42:10.992941  836363 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:42:10.992994  836363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:42:11.001748  836363 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:42:11.001760  836363 kubeadm.go:158] found existing configuration files:
	
	I1210 06:42:11.001824  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:42:11.011668  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:42:11.011736  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:42:11.019850  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:42:11.027722  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:42:11.027783  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:42:11.035605  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:42:11.043216  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:42:11.043273  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:42:11.050854  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:42:11.058765  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:42:11.058844  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
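The grep/rm pairs above are a stale-kubeconfig sweep: any /etc/kubernetes/*.conf that does not point at https://control-plane.minikube.internal:8441 is removed so kubeadm init can regenerate it. In this run all four greps exit with status 2 because the files no longer exist after the reset, so the rm calls are no-ops. Condensed into one hypothetical loop:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f" \
            || sudo rm -f "/etc/kubernetes/$f"   # no match (exit 1) or no file (exit 2): drop it
    done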
	I1210 06:42:11.066934  836363 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:42:11.105523  836363 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 06:42:11.105575  836363 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:42:11.188151  836363 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:42:11.188218  836363 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:42:11.188255  836363 kubeadm.go:319] OS: Linux
	I1210 06:42:11.188304  836363 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:42:11.188354  836363 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:42:11.188398  836363 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:42:11.188448  836363 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:42:11.188493  836363 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:42:11.188543  836363 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:42:11.188590  836363 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:42:11.188634  836363 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:42:11.188683  836363 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:42:11.250124  836363 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:42:11.250230  836363 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:42:11.250322  836363 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:42:11.255308  836363 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:42:11.258775  836363 out.go:252]   - Generating certificates and keys ...
	I1210 06:42:11.258873  836363 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:42:11.258950  836363 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:42:11.259045  836363 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:42:11.259113  836363 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:42:11.259184  836363 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:42:11.259237  836363 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:42:11.259299  836363 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:42:11.259360  836363 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:42:11.259435  836363 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:42:11.259512  836363 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:42:11.259731  836363 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:42:11.259789  836363 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:42:12.423232  836363 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:42:12.577934  836363 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:42:12.783953  836363 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:42:13.093269  836363 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:42:13.330460  836363 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:42:13.331164  836363 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:42:13.333749  836363 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:42:13.336840  836363 out.go:252]   - Booting up control plane ...
	I1210 06:42:13.336937  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:42:13.337013  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:42:13.337083  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:42:13.358981  836363 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:42:13.359103  836363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:42:13.368350  836363 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:42:13.369623  836363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:42:13.370235  836363 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:42:13.505873  836363 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:42:13.506077  836363 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:46:13.506731  836363 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00070392s
	I1210 06:46:13.506763  836363 kubeadm.go:319] 
	I1210 06:46:13.506850  836363 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:46:13.506894  836363 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:46:13.506999  836363 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:46:13.507005  836363 kubeadm.go:319] 
	I1210 06:46:13.507125  836363 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:46:13.507158  836363 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:46:13.507196  836363 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:46:13.507200  836363 kubeadm.go:319] 
	I1210 06:46:13.511687  836363 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:46:13.512136  836363 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:46:13.512245  836363 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:46:13.512495  836363 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:46:13.512501  836363 kubeadm.go:319] 
	I1210 06:46:13.512574  836363 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 06:46:13.512709  836363 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00070392s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
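Both init attempts die at the same gate: kubeadm polls the kubelet's local healthz endpoint for the full 4m0s and never gets an answer, which means the kubelet process itself is failing to come up; the cause is in the kubelet's own logs, not in the apiserver. The probe kubeadm describes and the triage commands it suggests, collected in one place:

    curl -sSL http://127.0.0.1:10248/healthz   # a healthy kubelet answers "ok"
    systemctl status kubelet                   # is the unit running at all?
    journalctl -xeu kubelet                    # the actual failure reason lives here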
	
	I1210 06:46:13.512792  836363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 06:46:13.924248  836363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:46:13.937517  836363 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:46:13.937579  836363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:46:13.945462  836363 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:46:13.945471  836363 kubeadm.go:158] found existing configuration files:
	
	I1210 06:46:13.945523  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:46:13.953499  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:46:13.953555  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:46:13.961232  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:46:13.969190  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:46:13.969248  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:46:13.976966  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:46:13.984824  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:46:13.984878  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:46:13.992414  836363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:46:14.002049  836363 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:46:14.002141  836363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:46:14.011865  836363 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:46:14.052323  836363 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 06:46:14.052372  836363 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:46:14.126225  836363 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:46:14.126291  836363 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:46:14.126325  836363 kubeadm.go:319] OS: Linux
	I1210 06:46:14.126369  836363 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:46:14.126415  836363 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:46:14.126482  836363 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:46:14.126530  836363 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:46:14.126577  836363 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:46:14.126624  836363 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:46:14.126668  836363 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:46:14.126716  836363 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:46:14.126761  836363 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:46:14.195770  836363 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:46:14.195873  836363 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:46:14.195962  836363 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:46:14.202979  836363 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:46:14.208298  836363 out.go:252]   - Generating certificates and keys ...
	I1210 06:46:14.208399  836363 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:46:14.208478  836363 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:46:14.208559  836363 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:46:14.208622  836363 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:46:14.208696  836363 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:46:14.208754  836363 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:46:14.208821  836363 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:46:14.208886  836363 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:46:14.208964  836363 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:46:14.209040  836363 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:46:14.209080  836363 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:46:14.209138  836363 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:46:14.596166  836363 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:46:14.891862  836363 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:46:14.944957  836363 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:46:15.236183  836363 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:46:15.354206  836363 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:46:15.354795  836363 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:46:15.357335  836363 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:46:15.360719  836363 out.go:252]   - Booting up control plane ...
	I1210 06:46:15.360814  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:46:15.360889  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:46:15.360954  836363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:46:15.381031  836363 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:46:15.381140  836363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:46:15.389841  836363 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:46:15.391023  836363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:46:15.391179  836363 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:46:15.526794  836363 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:46:15.526907  836363 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:50:15.527073  836363 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000371584s
	I1210 06:50:15.527097  836363 kubeadm.go:319] 
	I1210 06:50:15.527182  836363 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:50:15.527235  836363 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:50:15.527340  836363 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:50:15.527347  836363 kubeadm.go:319] 
	I1210 06:50:15.527451  836363 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:50:15.527482  836363 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:50:15.527512  836363 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:50:15.527515  836363 kubeadm.go:319] 
	I1210 06:50:15.531196  836363 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:50:15.531609  836363 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:50:15.531716  836363 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:50:15.531977  836363 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 06:50:15.531981  836363 kubeadm.go:319] 
	I1210 06:50:15.532049  836363 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 06:50:15.532106  836363 kubeadm.go:403] duration metric: took 12m8.555678628s to StartCluster
	I1210 06:50:15.532150  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:50:15.532210  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:50:15.570548  836363 cri.go:89] found id: ""
	I1210 06:50:15.570562  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.570569  836363 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:50:15.570575  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:50:15.570641  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:50:15.600057  836363 cri.go:89] found id: ""
	I1210 06:50:15.600071  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.600078  836363 logs.go:284] No container was found matching "etcd"
	I1210 06:50:15.600083  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:50:15.600143  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:50:15.630207  836363 cri.go:89] found id: ""
	I1210 06:50:15.630221  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.630228  836363 logs.go:284] No container was found matching "coredns"
	I1210 06:50:15.630232  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:50:15.630288  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:50:15.654767  836363 cri.go:89] found id: ""
	I1210 06:50:15.654781  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.654788  836363 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:50:15.654793  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:50:15.654853  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:50:15.678797  836363 cri.go:89] found id: ""
	I1210 06:50:15.678823  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.678830  836363 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:50:15.678835  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:50:15.678895  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:50:15.707130  836363 cri.go:89] found id: ""
	I1210 06:50:15.707144  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.707151  836363 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:50:15.707157  836363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:50:15.707215  836363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:50:15.732682  836363 cri.go:89] found id: ""
	I1210 06:50:15.732696  836363 logs.go:282] 0 containers: []
	W1210 06:50:15.732703  836363 logs.go:284] No container was found matching "kindnet"
	I1210 06:50:15.732711  836363 logs.go:123] Gathering logs for dmesg ...
	I1210 06:50:15.732725  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:50:15.749626  836363 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:50:15.749643  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:50:15.820658  836363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:50:15.811023   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.812026   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.813622   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.814217   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:15.815852   20872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[identical to the five "connection refused" errors quoted immediately above]
	
	** /stderr **
	I1210 06:50:15.820670  836363 logs.go:123] Gathering logs for containerd ...
	I1210 06:50:15.820682  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:50:15.883000  836363 logs.go:123] Gathering logs for container status ...
	I1210 06:50:15.883021  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:50:15.913106  836363 logs.go:123] Gathering logs for kubelet ...
	I1210 06:50:15.913122  836363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 06:50:15.972159  836363 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000371584s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:50:15.972201  836363 out.go:285] * 
	W1210 06:50:15.972316  836363 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[kubeadm init stdout identical to the block quoted in full above]
	stderr:
	[kubeadm init stderr identical to the block quoted in full above]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:50:15.972359  836363 out.go:285] * 
	W1210 06:50:15.974510  836363 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:50:15.979994  836363 out.go:203] 
	W1210 06:50:15.983642  836363 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[kubeadm init stdout identical to the block quoted in full above]
	stderr:
	[kubeadm init stderr identical to the block quoted in full above]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:50:15.983686  836363 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:50:15.983706  836363 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:50:15.987432  836363 out.go:203] 
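
	The [WARNING SystemVerification] lines and the kubelet journal further below agree on the root cause: kubelet v1.35 refuses to start on a cgroup v1 host unless that is explicitly opted into, so the canned cgroup-driver suggestion above may not address it by itself. A minimal sketch of the opt-in the warning describes, assuming the v1beta1 KubeletConfiguration spells the option failCgroupV1 (only the name 'FailCgroupV1' is confirmed by the warning text):

	    # Hypothetical KubeletConfiguration fragment for the kubeadm config that
	    # minikube renders at /var/tmp/minikube/kubeadm.yaml; the camelCase field
	    # spelling is an assumption -- the warning only names 'FailCgroupV1'.
	    ---
	    apiVersion: kubelet.config.k8s.io/v1beta1
	    kind: KubeletConfiguration
	    # Allow kubelet >= v1.35 to keep running on a (deprecated) cgroup v1 host,
	    # per https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	    failCgroupV1: false

	Per the same warning, the SystemVerification check must also be skipped explicitly; the --ignore-preflight-errors list in the kubeadm invocation above already includes it.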
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445107196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445121990Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445162984Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445179287Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445188756Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445200998Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445209959Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445223464Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445238939Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445267518Z" level=info msg="Connect containerd service"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.445551476Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.446055950Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.466617657Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.466678671Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.466705092Z" level=info msg="Start subscribing containerd event"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.466755874Z" level=info msg="Start recovering state"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511858771Z" level=info msg="Start event monitor"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511903539Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511912844Z" level=info msg="Start streaming server"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511923740Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511932676Z" level=info msg="runtime interface starting up..."
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511939502Z" level=info msg="starting plugins..."
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.511951014Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 06:38:05 functional-534748 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 06:38:05 functional-534748 containerd[9660]: time="2025-12-10T06:38:05.523710063Z" level=info msg="containerd successfully booted in 0.098844s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:52:31.326360   22521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:52:31.327296   22521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:52:31.329103   22521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:52:31.329837   22521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:52:31.331499   22521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 06:52:31 up  5:34,  0 user,  load average: 0.24, 0.24, 0.43
	Linux functional-534748 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
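
	Given the repeated cgroup v1 validation failures, the fastest check of which cgroup hierarchy this node actually runs is the filesystem type mounted at /sys/fs/cgroup (cgroup2fs indicates the unified v2 hierarchy, tmpfs the legacy v1 layout):

	    stat -fc %T /sys/fs/cgroup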
	
	
	==> kubelet <==
	Dec 10 06:52:28 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:52:29 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 498.
	Dec 10 06:52:29 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:29 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:29 functional-534748 kubelet[22411]: E1210 06:52:29.093484   22411 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:52:29 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:52:29 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:52:29 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 499.
	Dec 10 06:52:29 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:29 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:29 functional-534748 kubelet[22417]: E1210 06:52:29.854176   22417 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:52:29 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:52:29 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:52:30 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 500.
	Dec 10 06:52:30 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:30 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:30 functional-534748 kubelet[22437]: E1210 06:52:30.611484   22437 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:52:30 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:52:30 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:52:31 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 501.
	Dec 10 06:52:31 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:31 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:31 functional-534748 kubelet[22525]: E1210 06:52:31.355548   22525 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:52:31 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:52:31 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748: exit status 2 (359.931778ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-534748" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.48s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the WARNING above repeated 10 times in total]
I1210 06:50:34.385381  786751 retry.go:31] will retry after 2.784502649s: Temporary Error: Get "http://10.97.255.226": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the WARNING above repeated 13 times in total]
I1210 06:50:47.171062  786751 retry.go:31] will retry after 4.690431062s: Temporary Error: Get "http://10.97.255.226": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the WARNING above repeated 15 times in total]
I1210 06:51:01.863248  786751 retry.go:31] will retry after 5.055665529s: Temporary Error: Get "http://10.97.255.226": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 14 more times]
I1210 06:51:16.920669  786751 retry.go:31] will retry after 11.918966939s: Temporary Error: Get "http://10.97.255.226": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 21 more times]
I1210 06:51:38.841576  786751 retry.go:31] will retry after 15.617411032s: Temporary Error: Get "http://10.97.255.226": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 24 more times]
I1210 06:52:04.460682  786751 retry.go:31] will retry after 15.064482396s: Temporary Error: Get "http://10.97.255.226": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
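The interleaved retry.go lines show the suite re-probing the service at http://10.97.255.226 with growing, jittered delays. A rough Go sketch of that pattern follows; the timeout, deadline, and growth factor are assumptions for illustration, not minikube's actual retry.go internals:

// Sketch only: retry a flaky HTTP GET with growing delays until a deadline.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	const url = "http://10.97.255.226" // service IP taken from the log
	deadline := time.Now().Add(2 * time.Minute)
	delay := 5 * time.Second

	client := &http.Client{Timeout: 10 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			fmt.Println("service reachable:", resp.Status)
			return
		}
		// Mirrors the "will retry after ..." log lines above.
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait, roughly like the log's 5s -> 12s -> 15s
	}
	fmt.Println("gave up: service never became reachable")
}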
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 31 more times]
E1210 06:52:35.786692  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 40 more times]
E1210 06:53:17.488945  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the identical "connection refused" warning above is repeated 57 more times verbatim while the helper polls for the pod]
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748: exit status 2 (298.520754ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-534748" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
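For manual triage, the query the helper was looping on can be replayed with the profile's bundled kubectl. This is only a sketch assembled from names already present in the log above (profile functional-534748, namespace kube-system, selector integration-test=storage-provisioner); while the apiserver is down it fails with the same connection-refused error:

	out/minikube-linux-arm64 -p functional-534748 kubectl -- get pods -n kube-system -l integration-test=storage-provisioner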
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-534748
helpers_test.go:244: (dbg) docker inspect functional-534748:

-- stdout --
	[
	    {
	        "Id": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	        "Created": "2025-12-10T06:23:23.608302198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 825111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:23:23.673039154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hostname",
	        "HostsPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hosts",
	        "LogPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db-json.log",
	        "Name": "/functional-534748",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-534748:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-534748",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	                "LowerDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-534748",
	                "Source": "/var/lib/docker/volumes/functional-534748/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-534748",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-534748",
	                "name.minikube.sigs.k8s.io": "functional-534748",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3110adc4cbcb3c834e173481804e433b3e0d0c32a77d5d828fb821433a717f76",
	            "SandboxKey": "/var/run/docker/netns/3110adc4cbcb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-534748": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:67:c6:ed:32:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5c6adbf364f07065a2170dca5c03ba4b1a3df833c656e117d5727d09797ca30e",
	                    "EndpointID": "ba255b7075cbb83aa425533c0034d7778d609f47fa6442925eb6c8393edb0fa6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-534748",
	                        "afb46bc1850e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
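The inspect output above shows the kic container itself is healthy ("Status": "running") and that the apiserver port 8441/tcp is published to 127.0.0.1:33533 on the host. To confirm the refusal originates inside the container rather than in Docker networking, the published port can be probed directly (a sketch; the port number is read from the NetworkSettings block above, and -k is needed because the apiserver serves the self-signed cluster certificate):

	curl -k https://127.0.0.1:33533/healthz

With kube-apiserver stopped, this should fail the same way as the in-cluster requests do.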
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748: exit status 2 (331.531317ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
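Note that the two status probes disagree by design: --format={{.Host}} reports the state of the Docker container (Running, consistent with the inspect output above), while --format={{.APIServer}} health-checks kube-apiserver on port 8441 and therefore reports Stopped. The container-level state can also be cross-checked straight from Docker (a sketch):

	docker inspect -f '{{.State.Status}}' functional-534748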
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-534748 image load --daemon kicbase/echo-server:functional-534748 --alsologtostderr                                                                   │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ image          │ functional-534748 image ls                                                                                                                                      │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ image          │ functional-534748 image save kicbase/echo-server:functional-534748 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ image          │ functional-534748 image rm kicbase/echo-server:functional-534748 --alsologtostderr                                                                              │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ image          │ functional-534748 image ls                                                                                                                                      │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ image          │ functional-534748 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ image          │ functional-534748 image ls                                                                                                                                      │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ image          │ functional-534748 image save --daemon kicbase/echo-server:functional-534748 --alsologtostderr                                                                   │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh            │ functional-534748 ssh sudo cat /etc/test/nested/copy/786751/hosts                                                                                               │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh            │ functional-534748 ssh sudo cat /etc/ssl/certs/786751.pem                                                                                                        │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh            │ functional-534748 ssh sudo cat /usr/share/ca-certificates/786751.pem                                                                                            │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh            │ functional-534748 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh            │ functional-534748 ssh sudo cat /etc/ssl/certs/7867512.pem                                                                                                       │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh            │ functional-534748 ssh sudo cat /usr/share/ca-certificates/7867512.pem                                                                                           │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh            │ functional-534748 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ image          │ functional-534748 image ls --format short --alsologtostderr                                                                                                     │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ image          │ functional-534748 image ls --format json --alsologtostderr                                                                                                      │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ image          │ functional-534748 image ls --format table --alsologtostderr                                                                                                     │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ image          │ functional-534748 image ls --format yaml --alsologtostderr                                                                                                      │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh            │ functional-534748 ssh pgrep buildkitd                                                                                                                           │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ image          │ functional-534748 image build -t localhost/my-image:functional-534748 testdata/build --alsologtostderr                                                          │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:53 UTC │
	│ image          │ functional-534748 image ls                                                                                                                                      │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ update-context │ functional-534748 update-context --alsologtostderr -v=2                                                                                                         │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ update-context │ functional-534748 update-context --alsologtostderr -v=2                                                                                                         │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ update-context │ functional-534748 update-context --alsologtostderr -v=2                                                                                                         │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:52:45
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:52:45.807653  853756 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:52:45.807764  853756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:52:45.807774  853756 out.go:374] Setting ErrFile to fd 2...
	I1210 06:52:45.807779  853756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:52:45.808034  853756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 06:52:45.808382  853756 out.go:368] Setting JSON to false
	I1210 06:52:45.809206  853756 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":20090,"bootTime":1765329476,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 06:52:45.809271  853756 start.go:143] virtualization:  
	I1210 06:52:45.812558  853756 out.go:179] * [functional-534748] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:52:45.815542  853756 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:52:45.815687  853756 notify.go:221] Checking for updates...
	I1210 06:52:45.821390  853756 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:52:45.824195  853756 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:52:45.826987  853756 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 06:52:45.829774  853756 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:52:45.832618  853756 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:52:45.835931  853756 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:52:45.836500  853756 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:52:45.866568  853756 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:52:45.866757  853756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:52:45.930067  853756 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:52:45.920459297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:52:45.930177  853756 docker.go:319] overlay module found
	I1210 06:52:45.933251  853756 out.go:179] * Using the docker driver based on existing profile
	I1210 06:52:45.936100  853756 start.go:309] selected driver: docker
	I1210 06:52:45.936124  853756 start.go:927] validating driver "docker" against &{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:52:45.936235  853756 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:52:45.936344  853756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:52:46.003505  853756 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:52:45.991043175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:52:46.003976  853756 cni.go:84] Creating CNI manager for ""
	I1210 06:52:46.004045  853756 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:52:46.004093  853756 start.go:353] cluster config:
	{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:52:46.007190  853756 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 06:52:50 functional-534748 containerd[9660]: time="2025-12-10T06:52:50.678917942Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:52:50 functional-534748 containerd[9660]: time="2025-12-10T06:52:50.679753333Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-534748\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:52:51 functional-534748 containerd[9660]: time="2025-12-10T06:52:51.731849024Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-534748\""
	Dec 10 06:52:51 functional-534748 containerd[9660]: time="2025-12-10T06:52:51.734890185Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-534748\""
	Dec 10 06:52:51 functional-534748 containerd[9660]: time="2025-12-10T06:52:51.737074687Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 10 06:52:51 functional-534748 containerd[9660]: time="2025-12-10T06:52:51.745991692Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-534748\" returns successfully"
	Dec 10 06:52:51 functional-534748 containerd[9660]: time="2025-12-10T06:52:51.971965050Z" level=info msg="No images store for sha256:a25a8b93ed7b5587037ade52733a88ce58759ee4581473c7958c80ab2aede196"
	Dec 10 06:52:51 functional-534748 containerd[9660]: time="2025-12-10T06:52:51.974128120Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-534748\""
	Dec 10 06:52:51 functional-534748 containerd[9660]: time="2025-12-10T06:52:51.982446308Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:52:51 functional-534748 containerd[9660]: time="2025-12-10T06:52:51.982987443Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-534748\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:52:52 functional-534748 containerd[9660]: time="2025-12-10T06:52:52.753141311Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-534748\""
	Dec 10 06:52:52 functional-534748 containerd[9660]: time="2025-12-10T06:52:52.755492281Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-534748\""
	Dec 10 06:52:52 functional-534748 containerd[9660]: time="2025-12-10T06:52:52.757475812Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 10 06:52:52 functional-534748 containerd[9660]: time="2025-12-10T06:52:52.765806874Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-534748\" returns successfully"
	Dec 10 06:52:53 functional-534748 containerd[9660]: time="2025-12-10T06:52:53.427294038Z" level=info msg="No images store for sha256:332c9d04efc7ec4e527924810ba65924ca4b4462da5b51e83a1db6511851030d"
	Dec 10 06:52:53 functional-534748 containerd[9660]: time="2025-12-10T06:52:53.429497083Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-534748\""
	Dec 10 06:52:53 functional-534748 containerd[9660]: time="2025-12-10T06:52:53.436839965Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:52:53 functional-534748 containerd[9660]: time="2025-12-10T06:52:53.437173721Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-534748\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:53:01 functional-534748 containerd[9660]: time="2025-12-10T06:53:01.509547861Z" level=info msg="connecting to shim 89hjqb75nkqhk45cczu95oycn" address="unix:///run/containerd/s/acaef69a333349dda4fd5fbf2bba8f64e99e9d8307a01a93622cb6ed5319e933" namespace=k8s.io protocol=ttrpc version=3
	Dec 10 06:53:01 functional-534748 containerd[9660]: time="2025-12-10T06:53:01.580870020Z" level=info msg="shim disconnected" id=89hjqb75nkqhk45cczu95oycn namespace=k8s.io
	Dec 10 06:53:01 functional-534748 containerd[9660]: time="2025-12-10T06:53:01.580911744Z" level=info msg="cleaning up after shim disconnected" id=89hjqb75nkqhk45cczu95oycn namespace=k8s.io
	Dec 10 06:53:01 functional-534748 containerd[9660]: time="2025-12-10T06:53:01.580923756Z" level=info msg="cleaning up dead shim" id=89hjqb75nkqhk45cczu95oycn namespace=k8s.io
	Dec 10 06:53:01 functional-534748 containerd[9660]: time="2025-12-10T06:53:01.866392893Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-534748\""
	Dec 10 06:53:01 functional-534748 containerd[9660]: time="2025-12-10T06:53:01.874918230Z" level=info msg="ImageCreate event name:\"sha256:9f6f0759e744bfcad5ed76b52291b2be156d76fd27a253fc9806360f77556a11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:53:01 functional-534748 containerd[9660]: time="2025-12-10T06:53:01.875299658Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-534748\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:54:25.916852   25077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:54:25.917788   25077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:54:25.919566   25077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:54:25.920074   25077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:54:25.921609   25077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 06:54:25 up  5:36,  0 user,  load average: 0.33, 0.37, 0.46
	Linux functional-534748 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:54:22 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:54:23 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 650.
	Dec 10 06:54:23 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:54:23 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:54:23 functional-534748 kubelet[24944]: E1210 06:54:23.090551   24944 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:54:23 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:54:23 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:54:23 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 651.
	Dec 10 06:54:23 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:54:23 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:54:23 functional-534748 kubelet[24949]: E1210 06:54:23.839027   24949 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:54:23 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:54:23 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:54:24 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 652.
	Dec 10 06:54:24 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:54:24 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:54:24 functional-534748 kubelet[24955]: E1210 06:54:24.589909   24955 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:54:24 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:54:24 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:54:25 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 653.
	Dec 10 06:54:25 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:54:25 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:54:25 functional-534748 kubelet[24988]: E1210 06:54:25.303349   24988 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:54:25 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:54:25 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748: exit status 2 (314.645273ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-534748" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.64s)
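The kubelet entries above carry the root cause for this whole group of failures: the v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), systemd keeps restarting it (counter past 650), and so the apiserver on port 8441 never comes up. A minimal way to confirm the cgroup mode on the node, sketched here with standard diagnostics rather than harness commands (the container name is the one used throughout this report):

	# cgroup2fs means cgroup v2; tmpfs means cgroup v1, the failing case here
	docker exec functional-534748 stat -fc %T /sys/fs/cgroup/
	# the restart counter is also visible from the unit itself
	docker exec functional-534748 systemctl status kubelet --no-pager | head -n 5

On a cgroup v1 host such as this Ubuntu 20.04 runner, the usual remediation is outside the test: boot the host with cgroup v2 enabled (kernel cmdline systemd.unified_cgroup_hierarchy=1) or run an older Kubernetes version.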

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-534748 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-534748 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (58.017048ms)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-534748 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
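Note also that the go-template itself aborts whenever the apiserver returns an empty item list, which is why every assertion above reports a template reflect error rather than a plain missing label. A guarded variant of the same query (a sketch in standard Go text/template syntax, not the template the harness uses) would separate "no nodes returned" from "label missing":

	kubectl --context functional-534748 get nodes --output=go-template \
	  --template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{else}}NO_NODES{{end}}'

With the apiserver down this still exits non-zero on the connection refusal, but against a healthy cluster with zero nodes it prints NO_NODES instead of an index-out-of-range error.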
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-534748
helpers_test.go:244: (dbg) docker inspect functional-534748:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	        "Created": "2025-12-10T06:23:23.608302198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 825111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:23:23.673039154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hostname",
	        "HostsPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hosts",
	        "LogPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db-json.log",
	        "Name": "/functional-534748",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-534748:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-534748",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
	                "LowerDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-534748",
	                "Source": "/var/lib/docker/volumes/functional-534748/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-534748",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-534748",
	                "name.minikube.sigs.k8s.io": "functional-534748",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3110adc4cbcb3c834e173481804e433b3e0d0c32a77d5d828fb821433a717f76",
	            "SandboxKey": "/var/run/docker/netns/3110adc4cbcb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-534748": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:67:c6:ed:32:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5c6adbf364f07065a2170dca5c03ba4b1a3df833c656e117d5727d09797ca30e",
	                    "EndpointID": "ba255b7075cbb83aa425533c0034d7778d609f47fa6442925eb6c8393edb0fa6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-534748",
	                        "afb46bc1850e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
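The inspect output above shows the container itself is healthy: its state is Running, port 8441/tcp is published on 127.0.0.1:33533, and the node IP is pinned to 192.168.49.2. The refused connections therefore come from the apiserver process, not from Docker networking. A quick host-side probe of the published port, sketched with plain curl (the host port is the one reported above; -k skips verification of the minikube CA certificate):

	# "connection refused" here confirms nothing is listening behind the mapping
	curl -sk --max-time 5 https://127.0.0.1:33533/healthz || echo "apiserver not answering"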
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748: exit status 2 (299.371809ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3563118548/001:/mount2 --alsologtostderr -v=1                            │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ mount     │ -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3563118548/001:/mount3 --alsologtostderr -v=1                            │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ ssh       │ functional-534748 ssh findmnt -T /mount1                                                                                                                        │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh       │ functional-534748 ssh findmnt -T /mount2                                                                                                                        │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh       │ functional-534748 ssh findmnt -T /mount3                                                                                                                        │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ mount     │ -p functional-534748 --kill=true                                                                                                                                │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ start     │ -p functional-534748 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0             │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ start     │ -p functional-534748 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0             │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ start     │ -p functional-534748 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                       │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-534748 --alsologtostderr -v=1                                                                                                  │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ license   │                                                                                                                                                                 │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ ssh       │ functional-534748 ssh sudo systemctl is-active docker                                                                                                           │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ ssh       │ functional-534748 ssh sudo systemctl is-active crio                                                                                                             │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │                     │
	│ image     │ functional-534748 image load --daemon kicbase/echo-server:functional-534748 --alsologtostderr                                                                   │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ image     │ functional-534748 image ls                                                                                                                                      │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ image     │ functional-534748 image load --daemon kicbase/echo-server:functional-534748 --alsologtostderr                                                                   │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ image     │ functional-534748 image ls                                                                                                                                      │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ image     │ functional-534748 image load --daemon kicbase/echo-server:functional-534748 --alsologtostderr                                                                   │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ image     │ functional-534748 image ls                                                                                                                                      │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ image     │ functional-534748 image save kicbase/echo-server:functional-534748 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ image     │ functional-534748 image rm kicbase/echo-server:functional-534748 --alsologtostderr                                                                              │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ image     │ functional-534748 image ls                                                                                                                                      │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ image     │ functional-534748 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ image     │ functional-534748 image ls                                                                                                                                      │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ image     │ functional-534748 image save --daemon kicbase/echo-server:functional-534748 --alsologtostderr                                                                   │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:52:45
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:52:45.807653  853756 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:52:45.807764  853756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:52:45.807774  853756 out.go:374] Setting ErrFile to fd 2...
	I1210 06:52:45.807779  853756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:52:45.808034  853756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 06:52:45.808382  853756 out.go:368] Setting JSON to false
	I1210 06:52:45.809206  853756 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":20090,"bootTime":1765329476,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 06:52:45.809271  853756 start.go:143] virtualization:  
	I1210 06:52:45.812558  853756 out.go:179] * [functional-534748] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:52:45.815542  853756 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:52:45.815687  853756 notify.go:221] Checking for updates...
	I1210 06:52:45.821390  853756 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:52:45.824195  853756 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:52:45.826987  853756 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 06:52:45.829774  853756 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:52:45.832618  853756 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:52:45.835931  853756 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:52:45.836500  853756 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:52:45.866568  853756 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:52:45.866757  853756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:52:45.930067  853756 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:52:45.920459297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:52:45.930177  853756 docker.go:319] overlay module found
	I1210 06:52:45.933251  853756 out.go:179] * Using the docker driver based on existing profile
	I1210 06:52:45.936100  853756 start.go:309] selected driver: docker
	I1210 06:52:45.936124  853756 start.go:927] validating driver "docker" against &{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:52:45.936235  853756 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:52:45.936344  853756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:52:46.003505  853756 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:52:45.991043175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:52:46.003976  853756 cni.go:84] Creating CNI manager for ""
	I1210 06:52:46.004045  853756 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:52:46.004093  853756 start.go:353] cluster config:
	{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:52:46.007190  853756 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 06:52:49 functional-534748 containerd[9660]: time="2025-12-10T06:52:49.631975552Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-534748\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:52:50 functional-534748 containerd[9660]: time="2025-12-10T06:52:50.424047016Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-534748\""
	Dec 10 06:52:50 functional-534748 containerd[9660]: time="2025-12-10T06:52:50.426882988Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-534748\""
	Dec 10 06:52:50 functional-534748 containerd[9660]: time="2025-12-10T06:52:50.429195631Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 10 06:52:50 functional-534748 containerd[9660]: time="2025-12-10T06:52:50.439288101Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-534748\" returns successfully"
	Dec 10 06:52:50 functional-534748 containerd[9660]: time="2025-12-10T06:52:50.664737399Z" level=info msg="No images store for sha256:a25a8b93ed7b5587037ade52733a88ce58759ee4581473c7958c80ab2aede196"
	Dec 10 06:52:50 functional-534748 containerd[9660]: time="2025-12-10T06:52:50.666862889Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-534748\""
	Dec 10 06:52:50 functional-534748 containerd[9660]: time="2025-12-10T06:52:50.678917942Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:52:50 functional-534748 containerd[9660]: time="2025-12-10T06:52:50.679753333Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-534748\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:52:51 functional-534748 containerd[9660]: time="2025-12-10T06:52:51.731849024Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-534748\""
	Dec 10 06:52:51 functional-534748 containerd[9660]: time="2025-12-10T06:52:51.734890185Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-534748\""
	Dec 10 06:52:51 functional-534748 containerd[9660]: time="2025-12-10T06:52:51.737074687Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 10 06:52:51 functional-534748 containerd[9660]: time="2025-12-10T06:52:51.745991692Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-534748\" returns successfully"
	Dec 10 06:52:51 functional-534748 containerd[9660]: time="2025-12-10T06:52:51.971965050Z" level=info msg="No images store for sha256:a25a8b93ed7b5587037ade52733a88ce58759ee4581473c7958c80ab2aede196"
	Dec 10 06:52:51 functional-534748 containerd[9660]: time="2025-12-10T06:52:51.974128120Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-534748\""
	Dec 10 06:52:51 functional-534748 containerd[9660]: time="2025-12-10T06:52:51.982446308Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:52:51 functional-534748 containerd[9660]: time="2025-12-10T06:52:51.982987443Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-534748\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:52:52 functional-534748 containerd[9660]: time="2025-12-10T06:52:52.753141311Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-534748\""
	Dec 10 06:52:52 functional-534748 containerd[9660]: time="2025-12-10T06:52:52.755492281Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-534748\""
	Dec 10 06:52:52 functional-534748 containerd[9660]: time="2025-12-10T06:52:52.757475812Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 10 06:52:52 functional-534748 containerd[9660]: time="2025-12-10T06:52:52.765806874Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-534748\" returns successfully"
	Dec 10 06:52:53 functional-534748 containerd[9660]: time="2025-12-10T06:52:53.427294038Z" level=info msg="No images store for sha256:332c9d04efc7ec4e527924810ba65924ca4b4462da5b51e83a1db6511851030d"
	Dec 10 06:52:53 functional-534748 containerd[9660]: time="2025-12-10T06:52:53.429497083Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-534748\""
	Dec 10 06:52:53 functional-534748 containerd[9660]: time="2025-12-10T06:52:53.436839965Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:52:53 functional-534748 containerd[9660]: time="2025-12-10T06:52:53.437173721Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-534748\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:52:55.055388   23904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:52:55.056226   23904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:52:55.057899   23904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:52:55.058231   23904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:52:55.059746   23904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 06:52:55 up  5:34,  0 user,  load average: 1.01, 0.43, 0.49
	Linux functional-534748 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:52:51 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:52:52 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 529.
	Dec 10 06:52:52 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:52 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:52 functional-534748 kubelet[23650]: E1210 06:52:52.348172   23650 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:52:52 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:52:52 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:52:53 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 530.
	Dec 10 06:52:53 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:53 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:53 functional-534748 kubelet[23720]: E1210 06:52:53.113803   23720 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:52:53 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:52:53 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:52:53 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 531.
	Dec 10 06:52:53 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:53 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:53 functional-534748 kubelet[23769]: E1210 06:52:53.855350   23769 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:52:53 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:52:53 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:52:54 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 532.
	Dec 10 06:52:54 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:54 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:52:54 functional-534748 kubelet[23821]: E1210 06:52:54.608614   23821 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:52:54 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:52:54 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
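The kubelet log at the end of this dump is the root cause running through most of this report: kubelet validation rejects cgroup v1 hosts, so the kubelet crash-loops (restart counter 529 through 532 within a few seconds) and the apiserver never comes back, which is why the kubectl calls above are refused on port 8441. A minimal way to check which cgroup hierarchy a host is running, as a sketch (the profile name is the one from this run):

	# "cgroup2fs" means cgroup v2; "tmpfs" means the legacy v1 hierarchy
	stat -fc %T /sys/fs/cgroup/
	# the same check inside the minikube node container
	minikube -p functional-534748 ssh -- stat -fc %T /sys/fs/cgroup/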
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748: exit status 2 (313.01724ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-534748" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-534748 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-534748 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1210 06:50:23.745843  849393 out.go:360] Setting OutFile to fd 1 ...
I1210 06:50:23.746012  849393 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:50:23.746038  849393 out.go:374] Setting ErrFile to fd 2...
I1210 06:50:23.746050  849393 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:50:23.746346  849393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
I1210 06:50:23.746856  849393 mustload.go:66] Loading cluster: functional-534748
I1210 06:50:23.747364  849393 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1210 06:50:23.747903  849393 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
I1210 06:50:23.774665  849393 host.go:66] Checking if "functional-534748" exists ...
I1210 06:50:23.775004  849393 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1210 06:50:23.966103  849393 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-10 06:50:23.951221497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1210 06:50:23.966229  849393 api_server.go:166] Checking apiserver status ...
I1210 06:50:23.966297  849393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1210 06:50:23.966340  849393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
I1210 06:50:23.997302  849393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
W1210 06:50:24.118398  849393 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1210 06:50:24.122533  849393 out.go:179] * The control-plane node functional-534748 apiserver is not running: (state=Stopped)
I1210 06:50:24.125434  849393 out.go:179]   To start a cluster, run: "minikube start -p functional-534748"

stdout: * The control-plane node functional-534748 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-534748"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-534748 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-534748 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-534748 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-534748 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 849392: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-534748 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-534748 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)
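Both tunnel daemons exit with status 103 because minikube's preflight finds no running apiserver (the pgrep for kube-apiserver above comes back empty). A quick pre-check before tunneling, sketched with the same status command this suite uses elsewhere:

	# expect "Running"; in this run it reports "Stopped"
	out/minikube-linux-arm64 -p functional-534748 status --format='{{.APIServer}}'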

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-534748 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-534748 apply -f testdata/testsvc.yaml: exit status 1 (133.615666ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-534748 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.14s)
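The error is raised while downloading the OpenAPI schema, not by the manifest itself, so the suggested --validate=false would only mask the real problem: the apiserver at 192.168.49.2:8441 is down. A direct readiness probe, as a sketch:

	kubectl --context functional-534748 get --raw='/readyz?verbose'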

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (125.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.97.255.226": Temporary Error: Get "http://10.97.255.226": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-534748 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-534748 get svc nginx-svc: exit status 1 (61.309575ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-534748 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (125.20s)
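The test polls the service ClusterIP through the tunnel until a deadline; the same probe can be run by hand, a sketch using the address from this run:

	# bounded wait, mirroring the test's client timeout
	curl --max-time 10 http://10.97.255.226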

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-534748 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-534748 create deployment hello-node --image kicbase/echo-server: exit status 1 (54.259825ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-534748 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.05s)
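Since this create never reaches the apiserver, the hello-node deployment does not exist, and the ServiceCmd subtests that follow (List, JSONOutput, HTTPS, Format, URL) all fail on the same missing service rather than on independent bugs. Checking the precondition directly, as a sketch:

	kubectl --context functional-534748 get deployment hello-node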

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-534748 service list: exit status 103 (256.5949ms)

-- stdout --
	* The control-plane node functional-534748 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-534748"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-534748 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-534748 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-534748\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-534748 service list -o json: exit status 103 (247.561209ms)

-- stdout --
	* The control-plane node functional-534748 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-534748"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-534748 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-534748 service --namespace=default --https --url hello-node: exit status 103 (251.380446ms)

-- stdout --
	* The control-plane node functional-534748 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-534748"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-534748 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-534748 service hello-node --url --format={{.IP}}: exit status 103 (244.685819ms)

-- stdout --
	* The control-plane node functional-534748 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-534748"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-534748 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-534748 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-534748\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-534748 service hello-node --url: exit status 103 (264.005541ms)

-- stdout --
	* The control-plane node functional-534748 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-534748"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-534748 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-534748 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-534748"
functional_test.go:1579: failed to parse "* The control-plane node functional-534748 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-534748\"": parse "* The control-plane node functional-534748 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-534748\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (1.63s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3047742465/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765349557041216384" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3047742465/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765349557041216384" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3047742465/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765349557041216384" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3047742465/001/test-1765349557041216384
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 10 06:52 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 10 06:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 10 06:52 test-1765349557041216384
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh cat /mount-9p/test-1765349557041216384
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-534748 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-534748 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (59.69783ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-534748 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-534748 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (282.099568ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=40207)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 10 06:52 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 10 06:52 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 10 06:52 test-1765349557041216384
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-534748 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3047742465/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3047742465/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3047742465/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:40207
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3047742465/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3047742465/001:/mount-9p --alsologtostderr -v=1] stderr:
I1210 06:52:37.091015  851874 out.go:360] Setting OutFile to fd 1 ...
I1210 06:52:37.091186  851874 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:52:37.091206  851874 out.go:374] Setting ErrFile to fd 2...
I1210 06:52:37.091223  851874 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:52:37.091472  851874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
I1210 06:52:37.091747  851874 mustload.go:66] Loading cluster: functional-534748
I1210 06:52:37.092132  851874 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1210 06:52:37.092658  851874 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
I1210 06:52:37.124686  851874 host.go:66] Checking if "functional-534748" exists ...
I1210 06:52:37.124981  851874 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1210 06:52:37.190966  851874 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:52:37.179537474 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1210 06:52:37.191127  851874 cli_runner.go:164] Run: docker network inspect functional-534748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1210 06:52:37.211972  851874 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3047742465/001 into VM as /mount-9p ...
I1210 06:52:37.215074  851874 out.go:179]   - Mount type:   9p
I1210 06:52:37.217945  851874 out.go:179]   - User ID:      docker
I1210 06:52:37.220614  851874 out.go:179]   - Group ID:     docker
I1210 06:52:37.223460  851874 out.go:179]   - Version:      9p2000.L
I1210 06:52:37.226257  851874 out.go:179]   - Message Size: 262144
I1210 06:52:37.229125  851874 out.go:179]   - Options:      map[]
I1210 06:52:37.232073  851874 out.go:179]   - Bind Address: 192.168.49.1:40207
I1210 06:52:37.235053  851874 out.go:179] * Userspace file server: 
I1210 06:52:37.236486  851874 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1210 06:52:37.236562  851874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
I1210 06:52:37.273300  851874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
I1210 06:52:37.385361  851874 mount.go:180] unmount for /mount-9p ran successfully
I1210 06:52:37.385388  851874 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1210 06:52:37.393866  851874 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=40207,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1210 06:52:37.404603  851874 main.go:127] stdlog: ufs.go:141 connected
I1210 06:52:37.404770  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tversion tag 65535 msize 262144 version '9P2000.L'
I1210 06:52:37.404819  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rversion tag 65535 msize 262144 version '9P2000'
I1210 06:52:37.405049  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1210 06:52:37.405104  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rattach tag 0 aqid (15c3d24 708832d 'd')
I1210 06:52:37.405737  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tstat tag 0 fid 0
I1210 06:52:37.405795  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c3d24 708832d 'd') m d775 at 0 mt 1765349557 l 4096 t 0 d 0 ext )
I1210 06:52:37.407977  851874 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/.mount-process: {Name:mkb53d19113a7d200621c37641c794e9c1599a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 06:52:37.408174  851874 mount.go:105] mount successful: ""
I1210 06:52:37.411603  851874 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3047742465/001 to /mount-9p
I1210 06:52:37.414554  851874 out.go:203] 
I1210 06:52:37.421695  851874 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1210 06:52:37.682625  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tstat tag 0 fid 0
I1210 06:52:37.682719  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c3d24 708832d 'd') m d775 at 0 mt 1765349557 l 4096 t 0 d 0 ext )
I1210 06:52:37.683069  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Twalk tag 0 fid 0 newfid 1 
I1210 06:52:37.683106  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rwalk tag 0 
I1210 06:52:37.683235  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Topen tag 0 fid 1 mode 0
I1210 06:52:37.683282  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Ropen tag 0 qid (15c3d24 708832d 'd') iounit 0
I1210 06:52:37.683416  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tstat tag 0 fid 0
I1210 06:52:37.683453  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c3d24 708832d 'd') m d775 at 0 mt 1765349557 l 4096 t 0 d 0 ext )
I1210 06:52:37.683617  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tread tag 0 fid 1 offset 0 count 262120
I1210 06:52:37.683726  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rread tag 0 count 258
I1210 06:52:37.683857  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tread tag 0 fid 1 offset 258 count 261862
I1210 06:52:37.683887  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rread tag 0 count 0
I1210 06:52:37.684016  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tread tag 0 fid 1 offset 258 count 262120
I1210 06:52:37.684041  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rread tag 0 count 0
I1210 06:52:37.684163  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1210 06:52:37.684194  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rwalk tag 0 (15c3d25 708832d '') 
I1210 06:52:37.684312  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tstat tag 0 fid 2
I1210 06:52:37.684343  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (15c3d25 708832d '') m 644 at 0 mt 1765349557 l 24 t 0 d 0 ext )
I1210 06:52:37.684459  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tstat tag 0 fid 2
I1210 06:52:37.684487  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (15c3d25 708832d '') m 644 at 0 mt 1765349557 l 24 t 0 d 0 ext )
I1210 06:52:37.684610  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tclunk tag 0 fid 2
I1210 06:52:37.684645  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rclunk tag 0
I1210 06:52:37.684766  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Twalk tag 0 fid 0 newfid 2 0:'test-1765349557041216384' 
I1210 06:52:37.684796  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rwalk tag 0 (15c3d27 708832d '') 
I1210 06:52:37.684915  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tstat tag 0 fid 2
I1210 06:52:37.684944  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rstat tag 0 st ('test-1765349557041216384' 'jenkins' 'jenkins' '' q (15c3d27 708832d '') m 644 at 0 mt 1765349557 l 24 t 0 d 0 ext )
I1210 06:52:37.685060  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tstat tag 0 fid 2
I1210 06:52:37.685086  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rstat tag 0 st ('test-1765349557041216384' 'jenkins' 'jenkins' '' q (15c3d27 708832d '') m 644 at 0 mt 1765349557 l 24 t 0 d 0 ext )
I1210 06:52:37.685210  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tclunk tag 0 fid 2
I1210 06:52:37.685235  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rclunk tag 0
I1210 06:52:37.685354  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1210 06:52:37.685395  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rwalk tag 0 (15c3d26 708832d '') 
I1210 06:52:37.685522  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tstat tag 0 fid 2
I1210 06:52:37.685557  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (15c3d26 708832d '') m 644 at 0 mt 1765349557 l 24 t 0 d 0 ext )
I1210 06:52:37.685681  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tstat tag 0 fid 2
I1210 06:52:37.685713  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (15c3d26 708832d '') m 644 at 0 mt 1765349557 l 24 t 0 d 0 ext )
I1210 06:52:37.685847  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tclunk tag 0 fid 2
I1210 06:52:37.685870  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rclunk tag 0
I1210 06:52:37.685991  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tread tag 0 fid 1 offset 258 count 262120
I1210 06:52:37.686020  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rread tag 0 count 0
I1210 06:52:37.686149  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tclunk tag 0 fid 1
I1210 06:52:37.686181  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rclunk tag 0
I1210 06:52:37.942555  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Twalk tag 0 fid 0 newfid 1 0:'test-1765349557041216384' 
I1210 06:52:37.942630  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rwalk tag 0 (15c3d27 708832d '') 
I1210 06:52:37.942787  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tstat tag 0 fid 1
I1210 06:52:37.942834  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rstat tag 0 st ('test-1765349557041216384' 'jenkins' 'jenkins' '' q (15c3d27 708832d '') m 644 at 0 mt 1765349557 l 24 t 0 d 0 ext )
I1210 06:52:37.942986  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Twalk tag 0 fid 1 newfid 2 
I1210 06:52:37.943013  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rwalk tag 0 
I1210 06:52:37.943124  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Topen tag 0 fid 2 mode 0
I1210 06:52:37.943173  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Ropen tag 0 qid (15c3d27 708832d '') iounit 0
I1210 06:52:37.943300  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tstat tag 0 fid 1
I1210 06:52:37.943334  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rstat tag 0 st ('test-1765349557041216384' 'jenkins' 'jenkins' '' q (15c3d27 708832d '') m 644 at 0 mt 1765349557 l 24 t 0 d 0 ext )
I1210 06:52:37.943468  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tread tag 0 fid 2 offset 0 count 262120
I1210 06:52:37.943508  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rread tag 0 count 24
I1210 06:52:37.943639  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tread tag 0 fid 2 offset 24 count 262120
I1210 06:52:37.943667  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rread tag 0 count 0
I1210 06:52:37.943812  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tread tag 0 fid 2 offset 24 count 262120
I1210 06:52:37.943845  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rread tag 0 count 0
I1210 06:52:37.943993  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tclunk tag 0 fid 2
I1210 06:52:37.944043  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rclunk tag 0
I1210 06:52:37.944252  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tclunk tag 0 fid 1
I1210 06:52:37.944282  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rclunk tag 0
I1210 06:52:38.289207  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tstat tag 0 fid 0
I1210 06:52:38.289304  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c3d24 708832d 'd') m d775 at 0 mt 1765349557 l 4096 t 0 d 0 ext )
I1210 06:52:38.289645  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Twalk tag 0 fid 0 newfid 1 
I1210 06:52:38.289688  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rwalk tag 0 
I1210 06:52:38.289825  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Topen tag 0 fid 1 mode 0
I1210 06:52:38.289879  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Ropen tag 0 qid (15c3d24 708832d 'd') iounit 0
I1210 06:52:38.290009  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tstat tag 0 fid 0
I1210 06:52:38.290057  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c3d24 708832d 'd') m d775 at 0 mt 1765349557 l 4096 t 0 d 0 ext )
I1210 06:52:38.290225  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tread tag 0 fid 1 offset 0 count 262120
I1210 06:52:38.290333  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rread tag 0 count 258
I1210 06:52:38.290454  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tread tag 0 fid 1 offset 258 count 261862
I1210 06:52:38.290499  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rread tag 0 count 0
I1210 06:52:38.290626  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tread tag 0 fid 1 offset 258 count 262120
I1210 06:52:38.290650  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rread tag 0 count 0
I1210 06:52:38.290820  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1210 06:52:38.290874  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rwalk tag 0 (15c3d25 708832d '') 
I1210 06:52:38.291026  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tstat tag 0 fid 2
I1210 06:52:38.291063  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (15c3d25 708832d '') m 644 at 0 mt 1765349557 l 24 t 0 d 0 ext )
I1210 06:52:38.291188  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tstat tag 0 fid 2
I1210 06:52:38.291219  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (15c3d25 708832d '') m 644 at 0 mt 1765349557 l 24 t 0 d 0 ext )
I1210 06:52:38.291349  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tclunk tag 0 fid 2
I1210 06:52:38.291376  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rclunk tag 0
I1210 06:52:38.291545  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Twalk tag 0 fid 0 newfid 2 0:'test-1765349557041216384' 
I1210 06:52:38.291599  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rwalk tag 0 (15c3d27 708832d '') 
I1210 06:52:38.291749  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tstat tag 0 fid 2
I1210 06:52:38.291800  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rstat tag 0 st ('test-1765349557041216384' 'jenkins' 'jenkins' '' q (15c3d27 708832d '') m 644 at 0 mt 1765349557 l 24 t 0 d 0 ext )
I1210 06:52:38.291922  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tstat tag 0 fid 2
I1210 06:52:38.291963  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rstat tag 0 st ('test-1765349557041216384' 'jenkins' 'jenkins' '' q (15c3d27 708832d '') m 644 at 0 mt 1765349557 l 24 t 0 d 0 ext )
I1210 06:52:38.292087  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tclunk tag 0 fid 2
I1210 06:52:38.292112  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rclunk tag 0
I1210 06:52:38.292248  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1210 06:52:38.292298  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rwalk tag 0 (15c3d26 708832d '') 
I1210 06:52:38.292420  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tstat tag 0 fid 2
I1210 06:52:38.292454  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (15c3d26 708832d '') m 644 at 0 mt 1765349557 l 24 t 0 d 0 ext )
I1210 06:52:38.292568  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tstat tag 0 fid 2
I1210 06:52:38.292598  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (15c3d26 708832d '') m 644 at 0 mt 1765349557 l 24 t 0 d 0 ext )
I1210 06:52:38.292706  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tclunk tag 0 fid 2
I1210 06:52:38.292738  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rclunk tag 0
I1210 06:52:38.292853  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tread tag 0 fid 1 offset 258 count 262120
I1210 06:52:38.292885  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rread tag 0 count 0
I1210 06:52:38.293052  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tclunk tag 0 fid 1
I1210 06:52:38.293101  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rclunk tag 0
I1210 06:52:38.294265  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1210 06:52:38.294354  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rerror tag 0 ename 'file not found' ecode 0
I1210 06:52:38.547717  851874 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55690 Tclunk tag 0 fid 0
I1210 06:52:38.547765  851874 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55690 Rclunk tag 0
I1210 06:52:38.548894  851874 main.go:127] stdlog: ufs.go:147 disconnected
I1210 06:52:38.570311  851874 out.go:179] * Unmounting /mount-9p ...
I1210 06:52:38.573266  851874 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1210 06:52:38.580146  851874 mount.go:180] unmount for /mount-9p ran successfully
I1210 06:52:38.580255  851874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/.mount-process: {Name:mkb53d19113a7d200621c37641c794e9c1599a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 06:52:38.583368  851874 out.go:203] 
W1210 06:52:38.586294  851874 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1210 06:52:38.589162  851874 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (1.63s)
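Worth noting: the 9p mount itself worked end to end in this transcript (findmnt saw it, the test files were listed, and the unmount ran cleanly); only the busybox-mount pod step failed, again on the unreachable apiserver. Re-checking a live mount by hand, a sketch reusing commands from this test:

	out/minikube-linux-arm64 -p functional-534748 ssh -- "findmnt -T /mount-9p"
	out/minikube-linux-arm64 -p functional-534748 ssh -- "ls -la /mount-9p"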

TestKubernetesUpgrade (804.4s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-006690 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-006690 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (41.189300753s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-006690
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-006690: (1.728296833s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-006690 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-006690 status --format={{.Host}}: exit status 7 (87.468051ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
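The upgrade under test is a stop-and-restart of the same profile with a newer Kubernetes version; condensed to bare commands, a sketch with the versions from this run:

	minikube start -p kubernetes-upgrade-006690 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
	minikube stop -p kubernetes-upgrade-006690
	minikube start -p kubernetes-upgrade-006690 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --driver=docker --container-runtime=containerd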
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-006690 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1210 07:22:35.782400  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-006690 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (12m35.268196456s)

-- stdout --
	* [kubernetes-upgrade-006690] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-006690" primary control-plane node in "kubernetes-upgrade-006690" cluster
	* Pulling base image v0.0.48-1765319469-22089 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	I1210 07:22:17.193170  984872 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:22:17.193273  984872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:22:17.193279  984872 out.go:374] Setting ErrFile to fd 2...
	I1210 07:22:17.193283  984872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:22:17.193635  984872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 07:22:17.194045  984872 out.go:368] Setting JSON to false
	I1210 07:22:17.195256  984872 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":21862,"bootTime":1765329476,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 07:22:17.195322  984872 start.go:143] virtualization:  
	I1210 07:22:17.198962  984872 out.go:179] * [kubernetes-upgrade-006690] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:22:17.202896  984872 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:22:17.203059  984872 notify.go:221] Checking for updates...
	I1210 07:22:17.209215  984872 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:22:17.212326  984872 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:22:17.215499  984872 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 07:22:17.218233  984872 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:22:17.221054  984872 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:22:17.224336  984872 config.go:182] Loaded profile config "kubernetes-upgrade-006690": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1210 07:22:17.224994  984872 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:22:17.261444  984872 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:22:17.261565  984872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:22:17.339293  984872 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-10 07:22:17.329109847 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:22:17.339403  984872 docker.go:319] overlay module found
	I1210 07:22:17.342682  984872 out.go:179] * Using the docker driver based on existing profile
	I1210 07:22:17.345780  984872 start.go:309] selected driver: docker
	I1210 07:22:17.345799  984872 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-006690 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-006690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:22:17.345898  984872 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:22:17.346616  984872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:22:17.416381  984872 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-10 07:22:17.407605158 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:22:17.416730  984872 cni.go:84] Creating CNI manager for ""
	I1210 07:22:17.416807  984872 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:22:17.416854  984872 start.go:353] cluster config:
	{Name:kubernetes-upgrade-006690 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-006690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:22:17.420127  984872 out.go:179] * Starting "kubernetes-upgrade-006690" primary control-plane node in "kubernetes-upgrade-006690" cluster
	I1210 07:22:17.423026  984872 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:22:17.426016  984872 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:22:17.429121  984872 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:22:17.429176  984872 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 07:22:17.429187  984872 cache.go:65] Caching tarball of preloaded images
	I1210 07:22:17.429223  984872 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:22:17.429282  984872 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 07:22:17.429293  984872 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1210 07:22:17.429400  984872 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kubernetes-upgrade-006690/config.json ...
	I1210 07:22:17.448208  984872 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:22:17.448232  984872 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:22:17.448246  984872 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:22:17.448275  984872 start.go:360] acquireMachinesLock for kubernetes-upgrade-006690: {Name:mk3a2468a3021560cedf47def548cd3dc6e1aad8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:22:17.448331  984872 start.go:364] duration metric: took 35.019µs to acquireMachinesLock for "kubernetes-upgrade-006690"
	I1210 07:22:17.448355  984872 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:22:17.448361  984872 fix.go:54] fixHost starting: 
	I1210 07:22:17.448625  984872 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-006690 --format={{.State.Status}}
	I1210 07:22:17.465420  984872 fix.go:112] recreateIfNeeded on kubernetes-upgrade-006690: state=Stopped err=<nil>
	W1210 07:22:17.465451  984872 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:22:17.468857  984872 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-006690" ...
	I1210 07:22:17.468936  984872 cli_runner.go:164] Run: docker start kubernetes-upgrade-006690
	I1210 07:22:17.734069  984872 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-006690 --format={{.State.Status}}
	I1210 07:22:17.752398  984872 kic.go:430] container "kubernetes-upgrade-006690" state is running.
	I1210 07:22:17.752791  984872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-006690
	I1210 07:22:17.774988  984872 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kubernetes-upgrade-006690/config.json ...
	I1210 07:22:17.775212  984872 machine.go:94] provisionDockerMachine start ...
	I1210 07:22:17.775274  984872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-006690
	I1210 07:22:17.795270  984872 main.go:143] libmachine: Using SSH client type: native
	I1210 07:22:17.795583  984872 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33760 <nil> <nil>}
	I1210 07:22:17.795593  984872 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:22:17.796352  984872 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 07:22:20.942125  984872 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-006690
	
	I1210 07:22:20.942149  984872 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-006690"
	I1210 07:22:20.942213  984872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-006690
	I1210 07:22:20.960452  984872 main.go:143] libmachine: Using SSH client type: native
	I1210 07:22:20.960768  984872 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33760 <nil> <nil>}
	I1210 07:22:20.960784  984872 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-006690 && echo "kubernetes-upgrade-006690" | sudo tee /etc/hostname
	I1210 07:22:21.127355  984872 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-006690
	
	I1210 07:22:21.127533  984872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-006690
	I1210 07:22:21.149050  984872 main.go:143] libmachine: Using SSH client type: native
	I1210 07:22:21.149363  984872 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33760 <nil> <nil>}
	I1210 07:22:21.149380  984872 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-006690' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-006690/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-006690' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:22:21.296617  984872 main.go:143] libmachine: SSH cmd err, output: <nil>: 
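	(The script above is minikube's standard /etc/hosts provisioning: it rewrites an existing 127.0.1.1 entry, or appends one, so the node's new hostname resolves locally. A quick verification sketch, assuming the kic container is named after the profile as in this run:

	    docker exec kubernetes-upgrade-006690 grep '^127.0.1.1' /etc/hosts
	    # expected: 127.0.1.1 kubernetes-upgrade-006690
	)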
	I1210 07:22:21.296648  984872 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 07:22:21.296679  984872 ubuntu.go:190] setting up certificates
	I1210 07:22:21.296696  984872 provision.go:84] configureAuth start
	I1210 07:22:21.296759  984872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-006690
	I1210 07:22:21.328204  984872 provision.go:143] copyHostCerts
	I1210 07:22:21.328276  984872 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 07:22:21.328285  984872 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 07:22:21.328342  984872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 07:22:21.328444  984872 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 07:22:21.328449  984872 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 07:22:21.328471  984872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 07:22:21.328528  984872 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 07:22:21.328533  984872 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 07:22:21.328551  984872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 07:22:21.328604  984872 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-006690 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-006690 localhost minikube]
	I1210 07:22:21.574789  984872 provision.go:177] copyRemoteCerts
	I1210 07:22:21.574863  984872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:22:21.574911  984872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-006690
	I1210 07:22:21.596271  984872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33760 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/kubernetes-upgrade-006690/id_rsa Username:docker}
	I1210 07:22:21.696310  984872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:22:21.717432  984872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1210 07:22:21.739522  984872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:22:21.762639  984872 provision.go:87] duration metric: took 465.9159ms to configureAuth
	I1210 07:22:21.762664  984872 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:22:21.762856  984872 config.go:182] Loaded profile config "kubernetes-upgrade-006690": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:22:21.762863  984872 machine.go:97] duration metric: took 3.987643715s to provisionDockerMachine
	I1210 07:22:21.762871  984872 start.go:293] postStartSetup for "kubernetes-upgrade-006690" (driver="docker")
	I1210 07:22:21.762882  984872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:22:21.762929  984872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:22:21.762966  984872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-006690
	I1210 07:22:21.797069  984872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33760 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/kubernetes-upgrade-006690/id_rsa Username:docker}
	I1210 07:22:21.906907  984872 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:22:21.910313  984872 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:22:21.910343  984872 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:22:21.910355  984872 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 07:22:21.910409  984872 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 07:22:21.910522  984872 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 07:22:21.910641  984872 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:22:21.921223  984872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:22:21.941820  984872 start.go:296] duration metric: took 178.934708ms for postStartSetup
	I1210 07:22:21.941985  984872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:22:21.942044  984872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-006690
	I1210 07:22:21.972223  984872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33760 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/kubernetes-upgrade-006690/id_rsa Username:docker}
	I1210 07:22:22.070139  984872 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:22:22.076226  984872 fix.go:56] duration metric: took 4.627858212s for fixHost
	I1210 07:22:22.076294  984872 start.go:83] releasing machines lock for "kubernetes-upgrade-006690", held for 4.627948576s
	I1210 07:22:22.076398  984872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-006690
	I1210 07:22:22.096630  984872 ssh_runner.go:195] Run: cat /version.json
	I1210 07:22:22.096695  984872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-006690
	I1210 07:22:22.096968  984872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:22:22.097033  984872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-006690
	I1210 07:22:22.138800  984872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33760 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/kubernetes-upgrade-006690/id_rsa Username:docker}
	I1210 07:22:22.155491  984872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33760 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/kubernetes-upgrade-006690/id_rsa Username:docker}
	I1210 07:22:22.255460  984872 ssh_runner.go:195] Run: systemctl --version
	I1210 07:22:22.381291  984872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:22:22.386521  984872 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:22:22.386598  984872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:22:22.397757  984872 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:22:22.397782  984872 start.go:496] detecting cgroup driver to use...
	I1210 07:22:22.397843  984872 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:22:22.397913  984872 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:22:22.421377  984872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:22:22.443091  984872 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:22:22.443166  984872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:22:22.466897  984872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:22:22.487203  984872 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:22:22.665318  984872 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:22:22.844877  984872 docker.go:234] disabling docker service ...
	I1210 07:22:22.844951  984872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:22:22.862608  984872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:22:22.877830  984872 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:22:23.019024  984872 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:22:23.165691  984872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:22:23.181636  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:22:23.197153  984872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:22:23.206505  984872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:22:23.215988  984872 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:22:23.216053  984872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:22:23.225487  984872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:22:23.234978  984872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:22:23.244119  984872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:22:23.253329  984872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:22:23.261888  984872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:22:23.271414  984872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:22:23.281123  984872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:22:23.291087  984872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:22:23.299793  984872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:22:23.308037  984872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:22:23.448852  984872 ssh_runner.go:195] Run: sudo systemctl restart containerd
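	(Taken together, the commands above point crictl at containerd's socket via /etc/crictl.yaml and rewrite /etc/containerd/config.toml in place: SystemdCgroup=false to match the host's cgroupfs driver, the pause:3.10.1 sandbox image, the runc.v2 shim, /etc/cni/net.d as the CNI conf dir, and enable_unprivileged_ports=true, before systemd is reloaded and containerd restarted. A hedged spot-check of the result, reusing this run's container name:

	    docker exec kubernetes-upgrade-006690 cat /etc/crictl.yaml
	    docker exec kubernetes-upgrade-006690 \
	      grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	)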
	I1210 07:22:23.706241  984872 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:22:23.706309  984872 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:22:23.711516  984872 start.go:564] Will wait 60s for crictl version
	I1210 07:22:23.711576  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:22:23.715735  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:22:23.745127  984872 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:22:23.745277  984872 ssh_runner.go:195] Run: containerd --version
	I1210 07:22:23.773947  984872 ssh_runner.go:195] Run: containerd --version
	I1210 07:22:23.801330  984872 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 07:22:23.804247  984872 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-006690 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:22:23.833090  984872 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 07:22:23.837390  984872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:22:23.847202  984872 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-006690 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-006690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:22:23.847325  984872 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:22:23.847396  984872 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:22:23.874304  984872 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1210 07:22:23.874378  984872 ssh_runner.go:195] Run: which lz4
	I1210 07:22:23.878515  984872 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 07:22:23.882754  984872 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 07:22:23.882788  984872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (305624510 bytes)
	I1210 07:22:27.068674  984872 containerd.go:563] duration metric: took 3.190216508s to copy over tarball
	I1210 07:22:27.068745  984872 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 07:22:29.619020  984872 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.55025131s)
	I1210 07:22:29.619169  984872 kubeadm.go:910] preload failed, will try to load cached images: extracting tarball: 
	** stderr ** 
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	
	** /stderr **: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
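	(GNU tar reports "Cannot open: File exists" when an archive member collides with an on-disk path it cannot simply unlink, classically a non-empty directory; here, plausibly one left behind because the v1.28.0 preload had already populated the same overlayfs snapshot tree before this upgrade re-extracted over it. The harness treats the exit status 2 as recoverable, per the "preload failed, will try to load cached images" line above. A hypothetical stand-alone reproduction, with all demo paths made up:

	    mkdir -p /tmp/tardemo/src /tmp/tardemo/dst
	    echo hello > /tmp/tardemo/src/entry
	    tar -C /tmp/tardemo/src -cf /tmp/tardemo/a.tar entry
	    mkdir -p /tmp/tardemo/dst/entry/child    # same name already exists as a non-empty directory
	    tar -C /tmp/tardemo/dst -xf /tmp/tardemo/a.tar; echo "tar exit: $?"    # expect a non-zero status
	)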
	I1210 07:22:29.619293  984872 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:22:29.666907  984872 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1210 07:22:29.666979  984872 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 07:22:29.667076  984872 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:22:29.667377  984872 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 07:22:29.667566  984872 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 07:22:29.667740  984872 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 07:22:29.667949  984872 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 07:22:29.668134  984872 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 07:22:29.668313  984872 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:22:29.668486  984872 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:22:29.669895  984872 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:22:29.670037  984872 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 07:22:29.670230  984872 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 07:22:29.671177  984872 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 07:22:29.671685  984872 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 07:22:29.671858  984872 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:22:29.672022  984872 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:22:29.672341  984872 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 07:22:30.010852  984872 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1210 07:22:30.015063  984872 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1210 07:22:30.028696  984872 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1210 07:22:30.028853  984872 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1210 07:22:30.046287  984872 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1210 07:22:30.046366  984872 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 07:22:30.049025  984872 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1210 07:22:30.049110  984872 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 07:22:30.056935  984872 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1210 07:22:30.057069  984872 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 07:22:30.078640  984872 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1210 07:22:30.078795  984872 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 07:22:30.137448  984872 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1210 07:22:30.137575  984872 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 07:22:30.137668  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:22:30.142151  984872 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1210 07:22:30.142257  984872 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:22:30.142347  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:22:30.158363  984872 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1210 07:22:30.158534  984872 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:22:30.166544  984872 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1210 07:22:30.166657  984872 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 07:22:30.166742  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:22:30.191215  984872 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1210 07:22:30.191311  984872 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 07:22:30.191398  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:22:30.199559  984872 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1210 07:22:30.199658  984872 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 07:22:30.199748  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:22:30.199898  984872 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1210 07:22:30.199944  984872 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 07:22:30.200011  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:22:30.200153  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 07:22:30.200277  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 07:22:30.239943  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 07:22:30.240115  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 07:22:30.240231  984872 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1210 07:22:30.240282  984872 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:22:30.240346  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:22:30.282913  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 07:22:30.282983  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 07:22:30.283019  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 07:22:30.283263  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 07:22:30.419627  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:22:30.419717  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 07:22:30.419769  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 07:22:30.473266  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 07:22:30.473389  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 07:22:30.473443  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 07:22:30.473508  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 07:22:30.631989  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 07:22:30.632076  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:22:30.632144  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 07:22:30.674187  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 07:22:30.674267  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 07:22:30.674322  984872 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1210 07:22:30.674408  984872 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:22:30.674480  984872 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1210 07:22:30.674527  984872 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 07:22:30.811178  984872 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1210 07:22:30.811257  984872 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1210 07:22:30.811320  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:22:30.822886  984872 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 07:22:30.822924  984872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1210 07:22:30.822990  984872 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1210 07:22:30.823033  984872 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1210 07:22:30.823066  984872 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 07:22:30.823080  984872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	W1210 07:22:30.915988  984872 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1210 07:22:30.916315  984872 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1210 07:22:30.916418  984872 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:22:30.924461  984872 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 07:22:30.924597  984872 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1210 07:22:30.931021  984872 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1210 07:22:31.037237  984872 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1210 07:22:31.037278  984872 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:22:31.037330  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:22:31.139170  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:22:31.139193  984872 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:22:31.139375  984872 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:22:32.981900  984872 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.84247352s)
	I1210 07:22:32.981928  984872 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1210 07:22:32.981960  984872 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.842712472s)
	I1210 07:22:32.981969  984872 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 07:22:32.982051  984872 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:22:32.987304  984872 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 07:22:32.987341  984872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1210 07:22:33.100061  984872 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:22:33.100132  984872 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:22:33.630918  984872 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 07:22:33.631045  984872 cache_images.go:94] duration metric: took 3.964028078s to LoadCachedImages
	W1210 07:22:33.631162  984872 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0: no such file or directory
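The image-loading sequence above is minikube's cache fast path: stat the tarball on the node, transfer it only when the stat fails, then import it into containerd's k8s.io namespace with ctr. A minimal local sketch of that check-then-transfer-then-import flow (simplified for illustration; minikube's real implementation runs these over ssh_runner, and the paths below are examples from this log):

package main

import (
	"fmt"
	"os/exec"
)

// ensureImageLoaded mirrors the log's pattern: a failing stat means the
// image tarball is absent, so copy it into place before importing.
func ensureImageLoaded(cached, dest string) error {
	if err := exec.Command("stat", "-c", "%s %y", dest).Run(); err != nil {
		if out, err := exec.Command("sudo", "cp", cached, dest).CombinedOutput(); err != nil {
			return fmt.Errorf("transfer %s: %v: %s", cached, err, out)
		}
	}
	// Matches the "sudo ctr -n=k8s.io images import ..." calls above.
	if out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", dest).CombinedOutput(); err != nil {
		return fmt.Errorf("import %s: %v: %s", dest, err, out)
	}
	return nil
}

func main() {
	if err := ensureImageLoaded(
		"/home/jenkins/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0", // example cache path
		"/var/lib/minikube/images/etcd_3.6.5-0",
	); err != nil {
		fmt.Println(err)
	}
}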
	I1210 07:22:33.631326  984872 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1210 07:22:33.631460  984872 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-006690 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-006690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:22:33.631554  984872 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:22:33.668734  984872 cni.go:84] Creating CNI manager for ""
	I1210 07:22:33.668819  984872 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:22:33.668882  984872 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:22:33.668923  984872 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-006690 NodeName:kubernetes-upgrade-006690 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:22:33.669099  984872 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-006690"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
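The rendered kubeadm.yaml above stacks four API documents in one file, separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration; kubeadm and the kubelet each consume the kinds they understand. A rough stdlib-only sketch of enumerating the stacked kinds before handing the file over (the path is taken from this log; this is not a step minikube itself performs):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	// Each stacked document contributes one top-level "kind:" line.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind:") {
				fmt.Printf("document %d: %s\n", i, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
			}
		}
	}
}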
	I1210 07:22:33.669212  984872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:22:33.679172  984872 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:22:33.679321  984872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:22:33.687925  984872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (336 bytes)
	I1210 07:22:33.703135  984872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:22:33.717760  984872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1210 07:22:33.732749  984872 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:22:33.736685  984872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
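The bash one-liner above makes the /etc/hosts update idempotent: it filters out any stale control-plane.minikube.internal entry, appends the current mapping, and copies the temp file back into place via sudo. A hedged Go equivalent of the same filter-and-append step (illustrative only; it prints the result instead of writing through a temp file as the real command does):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.76.2\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any previous mapping for the control-plane alias.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	// Real code would write via a temp file + sudo cp, as the log shows.
	fmt.Println(strings.Join(kept, "\n"))
}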
	I1210 07:22:33.747285  984872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:22:33.894106  984872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:22:33.912325  984872 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kubernetes-upgrade-006690 for IP: 192.168.76.2
	I1210 07:22:33.912350  984872 certs.go:195] generating shared ca certs ...
	I1210 07:22:33.912367  984872 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:22:33.912513  984872 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 07:22:33.912570  984872 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 07:22:33.912583  984872 certs.go:257] generating profile certs ...
	I1210 07:22:33.912672  984872 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kubernetes-upgrade-006690/client.key
	I1210 07:22:33.912733  984872 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kubernetes-upgrade-006690/apiserver.key.685da8f7
	I1210 07:22:33.912780  984872 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kubernetes-upgrade-006690/proxy-client.key
	I1210 07:22:33.912896  984872 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 07:22:33.912933  984872 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 07:22:33.912946  984872 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:22:33.912973  984872 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:22:33.913002  984872 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:22:33.913031  984872 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 07:22:33.913079  984872 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:22:33.913705  984872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:22:33.941682  984872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:22:33.968366  984872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:22:34.020285  984872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:22:34.044147  984872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kubernetes-upgrade-006690/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1210 07:22:34.077971  984872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kubernetes-upgrade-006690/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:22:34.103634  984872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kubernetes-upgrade-006690/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:22:34.131423  984872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kubernetes-upgrade-006690/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 07:22:34.151159  984872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:22:34.171301  984872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 07:22:34.191253  984872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 07:22:34.211000  984872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:22:34.224829  984872 ssh_runner.go:195] Run: openssl version
	I1210 07:22:34.233866  984872 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:22:34.243054  984872 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:22:34.251815  984872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:22:34.256774  984872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:22:34.256897  984872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:22:34.299919  984872 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:22:34.308010  984872 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 07:22:34.315760  984872 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 07:22:34.323931  984872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 07:22:34.328283  984872 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 07:22:34.328377  984872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 07:22:34.370299  984872 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:22:34.378335  984872 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 07:22:34.386076  984872 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 07:22:34.394151  984872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 07:22:34.400922  984872 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 07:22:34.401023  984872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 07:22:34.443682  984872 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
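The openssl/ln sequence above installs each CA into the node's trust store: OpenSSL looks certificates up by subject hash, so every PEM under /usr/share/ca-certificates needs a <hash>.0 symlink in /etc/ssl/certs (b5213941.0, 51391683.0, and 3ec20f2e.0 in this run). A small sketch of that hash-and-link step; installTrustLink is an assumed helper name, not minikube's:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installTrustLink computes the OpenSSL subject hash of a PEM and points
// /etc/ssl/certs/<hash>.0 at it, mirroring the "ln -fs" calls in the log.
func installTrustLink(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f semantics: replace a stale link if present
	return os.Symlink(pem, link)
}

func main() {
	if err := installTrustLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}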
	I1210 07:22:34.451643  984872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:22:34.456388  984872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:22:34.500500  984872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:22:34.548932  984872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:22:34.598097  984872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:22:34.647525  984872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:22:34.694505  984872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 07:22:34.737456  984872 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-006690 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-006690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:22:34.737541  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:22:34.737603  984872 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:22:34.766502  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:22:34.766530  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:22:34.766536  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:22:34.766540  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:22:34.766544  984872 cri.go:89] found id: ""
	I1210 07:22:34.766595  984872 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1210 07:22:34.799822  984872 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:22:34Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1210 07:22:34.799915  984872 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:22:34.811777  984872 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:22:34.811795  984872 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:22:34.811849  984872 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:22:34.824237  984872 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:22:34.824636  984872 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-006690" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:22:34.824744  984872 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-784887/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-006690" cluster setting kubeconfig missing "kubernetes-upgrade-006690" context setting]
	I1210 07:22:34.825030  984872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:22:34.825579  984872 kapi.go:59] client config for kubernetes-upgrade-006690: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kubernetes-upgrade-006690/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kubernetes-upgrade-006690/client.key", CAFile:"/home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 07:22:34.826071  984872 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 07:22:34.826089  984872 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 07:22:34.826094  984872 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 07:22:34.826100  984872 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 07:22:34.826203  984872 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 07:22:34.826501  984872 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:22:34.843836  984872 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 07:21:49.627737275 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 07:22:33.728290612 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///run/containerd/containerd.sock
	   name: "kubernetes-upgrade-006690"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
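The drift boils down to the kubeadm v1beta3 to v1beta4 API bump: extraArgs changed from a flat string map to an ordered list of name/value pairs, the etcd proxy-refresh-interval override was dropped, and kubernetesVersion moved from v1.28.0 to v1.35.0-beta.0. The shape change, sketched as simplified Go types (illustrative only, not kubeadm's actual definitions):

package kubeadmtypes

// v1beta3: extraArgs was a flat map, so keys were unordered and could
// not repeat.
type extraArgsV1beta3 map[string]string

// v1beta4: extraArgs is an ordered list of name/value pairs, which is
// exactly what the "+ - name: ... / + value: ..." hunks above migrate to.
type arg struct {
	Name  string `yaml:"name"`
	Value string `yaml:"value"`
}

type extraArgsV1beta4 []arg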
	I1210 07:22:34.843878  984872 kubeadm.go:1161] stopping kube-system containers ...
	I1210 07:22:34.843891  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1210 07:22:34.843952  984872 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:22:34.928240  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:22:34.928313  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:22:34.928339  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:22:34.928364  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:22:34.928394  984872 cri.go:89] found id: ""
	I1210 07:22:34.928432  984872 cri.go:252] Stopping containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:22:34.928537  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:22:34.936864  984872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4
	I1210 07:22:34.986805  984872 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 07:22:35.012174  984872 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:22:35.025336  984872 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Dec 10 07:21 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 10 07:21 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 10 07:22 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 10 07:21 /etc/kubernetes/scheduler.conf
	
	I1210 07:22:35.025464  984872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:22:35.047726  984872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:22:35.062301  984872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:22:35.081443  984872 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:22:35.081592  984872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:22:35.099206  984872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:22:35.130370  984872 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:22:35.130540  984872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
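The grep-then-remove pass above is the stale-kubeconfig cleanup: any component kubeconfig that no longer points at https://control-plane.minikube.internal:8443 is deleted so the subsequent "kubeadm init phase kubeconfig" regenerates it. A sketch of that logic under the same assumptions (the file list and endpoint are taken from this log):

package main

import "os/exec"

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// grep exits non-zero when the endpoint is absent; treat that
		// as "stale" and remove the file so kubeadm recreates it.
		if exec.Command("sudo", "grep", endpoint, conf).Run() != nil {
			_ = exec.Command("sudo", "rm", "-f", conf).Run()
		}
	}
}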
	I1210 07:22:35.143076  984872 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:22:35.157402  984872 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:22:35.266615  984872 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:22:37.022167  984872 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.755513232s)
	I1210 07:22:37.022336  984872 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:22:37.245803  984872 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:22:37.300825  984872 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:22:37.343336  984872 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:22:37.343449  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:37.843645  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same "sudo pgrep -xnf kube-apiserver.*minikube.*" check repeats every 500 ms from 07:22:38.343 through 07:23:36.344 without finding an apiserver process ...]
	I1210 07:23:36.844235  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
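Each pgrep above is one tick of the apiserver wait loop: poll for a kube-apiserver process every 500 ms until it appears or the wait expires, at which point minikube falls back to gathering diagnostics, as it does next. A minimal sketch of such a loop (the interval is read off the timestamps; the timeout value is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 once a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		fmt.Println(err) // the failure path that triggers the log gathering below
	}
}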
	I1210 07:23:37.344012  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:37.344108  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:37.387016  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:23:37.387041  984872 cri.go:89] found id: ""
	I1210 07:23:37.387050  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:23:37.387105  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:37.391261  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:23:37.391337  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:37.460853  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:23:37.460881  984872 cri.go:89] found id: ""
	I1210 07:23:37.460891  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:23:37.460961  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:37.465330  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:23:37.465411  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:37.512904  984872 cri.go:89] found id: ""
	I1210 07:23:37.512934  984872 logs.go:282] 0 containers: []
	W1210 07:23:37.512943  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:23:37.512950  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:37.513014  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:37.557417  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:23:37.557445  984872 cri.go:89] found id: ""
	I1210 07:23:37.557454  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:23:37.557516  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:37.565378  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:37.565461  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:37.604757  984872 cri.go:89] found id: ""
	I1210 07:23:37.604843  984872 logs.go:282] 0 containers: []
	W1210 07:23:37.604866  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:37.604887  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:37.605012  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:37.655714  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:23:37.655784  984872 cri.go:89] found id: ""
	I1210 07:23:37.655821  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:23:37.655920  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:37.661195  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:37.661319  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:37.716987  984872 cri.go:89] found id: ""
	I1210 07:23:37.717065  984872 logs.go:282] 0 containers: []
	W1210 07:23:37.717095  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:37.717115  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:37.717224  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:37.765940  984872 cri.go:89] found id: ""
	I1210 07:23:37.766018  984872 logs.go:282] 0 containers: []
	W1210 07:23:37.766053  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:37.766084  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:37.766134  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:37.859014  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:37.859036  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:23:37.859050  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:23:37.909081  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:23:37.909116  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:23:37.956745  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:23:37.956783  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:38.027872  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:23:38.027967  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:23:38.095394  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:23:38.095470  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:23:38.153551  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:23:38.153634  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:23:38.207399  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:38.207479  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:38.299282  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:38.299362  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
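The gathering pass fans out over a fixed set of sources: "describe nodes" via the bundled kubectl, per-container logs via "crictl logs --tail 400", unit logs via journalctl, and kernel warnings via dmesg; per-source failures (like the refused connection above) are recorded rather than fatal. A condensed sketch of that fan-out, with the source names and pipelines copied from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			// Tolerate per-source failures so one broken collector
			// doesn't hide the rest of the diagnostics.
			fmt.Printf("gathering %s failed: %v\n", s.name, err)
			continue
		}
		fmt.Printf("== %s ==\n%s\n", s.name, out)
	}
}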
	I1210 07:23:40.832292  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:40.843322  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:40.843391  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:40.879409  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:23:40.879432  984872 cri.go:89] found id: ""
	I1210 07:23:40.879442  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:23:40.879507  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:40.884854  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:23:40.884947  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:40.918685  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:23:40.918710  984872 cri.go:89] found id: ""
	I1210 07:23:40.918719  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:23:40.918774  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:40.923136  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:23:40.923220  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:40.955930  984872 cri.go:89] found id: ""
	I1210 07:23:40.955958  984872 logs.go:282] 0 containers: []
	W1210 07:23:40.955968  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:23:40.955977  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:40.956038  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:40.982781  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:23:40.982815  984872 cri.go:89] found id: ""
	I1210 07:23:40.982825  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:23:40.982910  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:40.986566  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:40.986641  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:41.013887  984872 cri.go:89] found id: ""
	I1210 07:23:41.013960  984872 logs.go:282] 0 containers: []
	W1210 07:23:41.013984  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:41.014004  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:41.014104  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:41.040915  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:23:41.040980  984872 cri.go:89] found id: ""
	I1210 07:23:41.041004  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:23:41.041090  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:41.044764  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:41.044877  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:41.069349  984872 cri.go:89] found id: ""
	I1210 07:23:41.069377  984872 logs.go:282] 0 containers: []
	W1210 07:23:41.069396  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:41.069402  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:41.069475  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:41.096472  984872 cri.go:89] found id: ""
	I1210 07:23:41.096538  984872 logs.go:282] 0 containers: []
	W1210 07:23:41.096552  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:41.096566  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:23:41.096578  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:23:41.130810  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:23:41.130846  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:23:41.157223  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:23:41.157252  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:23:41.188230  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:23:41.188263  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:23:41.217675  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:23:41.217711  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:41.246272  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:41.246299  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:41.306107  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:41.306184  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:41.325846  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:23:41.325876  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:23:41.362537  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:41.362612  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:41.425904  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:43.926175  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:43.936009  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:43.936084  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:43.960211  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:23:43.960233  984872 cri.go:89] found id: ""
	I1210 07:23:43.960241  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:23:43.960318  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:43.963930  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:23:43.964011  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:43.989059  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:23:43.989079  984872 cri.go:89] found id: ""
	I1210 07:23:43.989087  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:23:43.989143  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:43.992741  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:23:43.992816  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:44.020324  984872 cri.go:89] found id: ""
	I1210 07:23:44.020352  984872 logs.go:282] 0 containers: []
	W1210 07:23:44.020362  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:23:44.020368  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:44.020432  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:44.045998  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:23:44.046021  984872 cri.go:89] found id: ""
	I1210 07:23:44.046030  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:23:44.046088  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:44.049986  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:44.050063  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:44.075546  984872 cri.go:89] found id: ""
	I1210 07:23:44.075569  984872 logs.go:282] 0 containers: []
	W1210 07:23:44.075578  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:44.075584  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:44.075661  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:44.101509  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:23:44.101531  984872 cri.go:89] found id: ""
	I1210 07:23:44.101541  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:23:44.101594  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:44.105163  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:44.105235  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:44.136038  984872 cri.go:89] found id: ""
	I1210 07:23:44.136060  984872 logs.go:282] 0 containers: []
	W1210 07:23:44.136069  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:44.136075  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:44.136135  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:44.164567  984872 cri.go:89] found id: ""
	I1210 07:23:44.164644  984872 logs.go:282] 0 containers: []
	W1210 07:23:44.164662  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:44.164676  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:44.164688  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:44.222228  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:44.222263  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:44.239023  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:23:44.239053  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:23:44.266433  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:23:44.266490  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:44.297583  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:44.297611  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:44.384928  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:44.384953  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:23:44.384967  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:23:44.431590  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:23:44.431623  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:23:44.466842  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:23:44.466876  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:23:44.498724  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:23:44.498763  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:23:47.030512  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:47.040307  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:47.040397  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:47.065258  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:23:47.065278  984872 cri.go:89] found id: ""
	I1210 07:23:47.065287  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:23:47.065378  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:47.069111  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:23:47.069232  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:47.094561  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:23:47.094588  984872 cri.go:89] found id: ""
	I1210 07:23:47.094597  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:23:47.094656  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:47.098174  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:23:47.098249  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:47.122897  984872 cri.go:89] found id: ""
	I1210 07:23:47.122919  984872 logs.go:282] 0 containers: []
	W1210 07:23:47.122929  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:23:47.122936  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:47.122994  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:47.147455  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:23:47.147476  984872 cri.go:89] found id: ""
	I1210 07:23:47.147484  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:23:47.147538  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:47.151039  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:47.151148  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:47.179165  984872 cri.go:89] found id: ""
	I1210 07:23:47.179190  984872 logs.go:282] 0 containers: []
	W1210 07:23:47.179199  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:47.179206  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:47.179280  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:47.205192  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:23:47.205217  984872 cri.go:89] found id: ""
	I1210 07:23:47.205234  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:23:47.205292  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:47.209259  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:47.209333  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:47.234056  984872 cri.go:89] found id: ""
	I1210 07:23:47.234080  984872 logs.go:282] 0 containers: []
	W1210 07:23:47.234088  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:47.234095  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:47.234177  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:47.260087  984872 cri.go:89] found id: ""
	I1210 07:23:47.260110  984872 logs.go:282] 0 containers: []
	W1210 07:23:47.260120  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:47.260153  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:23:47.260172  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:47.312819  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:47.312899  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:47.379031  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:23:47.379069  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:23:47.431622  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:23:47.431658  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:23:47.462785  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:23:47.462821  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:23:47.491875  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:47.491910  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:47.507894  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:47.507928  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:47.572377  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:47.572396  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:23:47.572410  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:23:47.613739  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:23:47.613768  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:23:50.140587  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:50.151027  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:50.151102  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:50.177989  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:23:50.178014  984872 cri.go:89] found id: ""
	I1210 07:23:50.178023  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:23:50.178082  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:50.181746  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:23:50.181845  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:50.212899  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:23:50.212924  984872 cri.go:89] found id: ""
	I1210 07:23:50.212935  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:23:50.212992  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:50.216822  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:23:50.216913  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:50.242344  984872 cri.go:89] found id: ""
	I1210 07:23:50.242370  984872 logs.go:282] 0 containers: []
	W1210 07:23:50.242380  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:23:50.242386  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:50.242527  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:50.267672  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:23:50.267738  984872 cri.go:89] found id: ""
	I1210 07:23:50.267762  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:23:50.267865  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:50.271477  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:50.271553  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:50.299979  984872 cri.go:89] found id: ""
	I1210 07:23:50.300046  984872 logs.go:282] 0 containers: []
	W1210 07:23:50.300069  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:50.300090  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:50.300176  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:50.332453  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:23:50.332519  984872 cri.go:89] found id: ""
	I1210 07:23:50.332543  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:23:50.332632  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:50.337575  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:50.337689  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:50.372969  984872 cri.go:89] found id: ""
	I1210 07:23:50.373056  984872 logs.go:282] 0 containers: []
	W1210 07:23:50.373141  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:50.373167  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:50.373257  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:50.398904  984872 cri.go:89] found id: ""
	I1210 07:23:50.398971  984872 logs.go:282] 0 containers: []
	W1210 07:23:50.398986  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:50.399001  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:50.399019  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:50.464725  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:50.464744  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:23:50.464758  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:23:50.504332  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:23:50.504371  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:23:50.536508  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:23:50.536544  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:23:50.578295  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:23:50.578326  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:23:50.608731  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:50.608767  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:50.625449  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:23:50.625479  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:23:50.656740  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:23:50.656768  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:50.685780  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:50.685808  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:53.243819  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:53.253868  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:53.253944  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:53.278676  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:23:53.278707  984872 cri.go:89] found id: ""
	I1210 07:23:53.278718  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:23:53.278778  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:53.282912  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:23:53.283017  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:53.322496  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:23:53.322521  984872 cri.go:89] found id: ""
	I1210 07:23:53.322530  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:23:53.322589  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:53.327051  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:23:53.327129  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:53.355594  984872 cri.go:89] found id: ""
	I1210 07:23:53.355618  984872 logs.go:282] 0 containers: []
	W1210 07:23:53.355626  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:23:53.355632  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:53.355694  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:53.386391  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:23:53.386413  984872 cri.go:89] found id: ""
	I1210 07:23:53.386423  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:23:53.386509  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:53.390336  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:53.390419  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:53.417290  984872 cri.go:89] found id: ""
	I1210 07:23:53.417314  984872 logs.go:282] 0 containers: []
	W1210 07:23:53.417323  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:53.417329  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:53.417392  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:53.443507  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:23:53.443530  984872 cri.go:89] found id: ""
	I1210 07:23:53.443539  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:23:53.443618  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:53.447419  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:53.447499  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:53.473240  984872 cri.go:89] found id: ""
	I1210 07:23:53.473263  984872 logs.go:282] 0 containers: []
	W1210 07:23:53.473279  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:53.473286  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:53.473350  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:53.501191  984872 cri.go:89] found id: ""
	I1210 07:23:53.501215  984872 logs.go:282] 0 containers: []
	W1210 07:23:53.501224  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:53.501240  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:53.501253  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:53.518963  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:23:53.518997  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:23:53.556339  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:23:53.556370  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:23:53.595395  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:53.595432  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:53.657734  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:53.657775  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:53.725970  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:53.726000  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:23:53.726015  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:23:53.760116  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:23:53.760150  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:23:53.806308  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:23:53.806341  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:23:53.838622  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:23:53.838652  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:56.389695  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:56.400446  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:56.400524  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:56.427343  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:23:56.427366  984872 cri.go:89] found id: ""
	I1210 07:23:56.427375  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:23:56.427448  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:56.431123  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:23:56.431201  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:56.457138  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:23:56.457162  984872 cri.go:89] found id: ""
	I1210 07:23:56.457170  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:23:56.457232  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:56.460997  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:23:56.461080  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:56.486325  984872 cri.go:89] found id: ""
	I1210 07:23:56.486355  984872 logs.go:282] 0 containers: []
	W1210 07:23:56.486364  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:23:56.486371  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:56.486457  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:56.511315  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:23:56.511341  984872 cri.go:89] found id: ""
	I1210 07:23:56.511350  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:23:56.511433  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:56.515149  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:56.515247  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:56.540050  984872 cri.go:89] found id: ""
	I1210 07:23:56.540073  984872 logs.go:282] 0 containers: []
	W1210 07:23:56.540081  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:56.540088  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:56.540171  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:56.565275  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:23:56.565295  984872 cri.go:89] found id: ""
	I1210 07:23:56.565303  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:23:56.565379  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:56.569534  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:56.569649  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:56.594273  984872 cri.go:89] found id: ""
	I1210 07:23:56.594347  984872 logs.go:282] 0 containers: []
	W1210 07:23:56.594371  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:56.594390  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:56.594515  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:56.618894  984872 cri.go:89] found id: ""
	I1210 07:23:56.618970  984872 logs.go:282] 0 containers: []
	W1210 07:23:56.618984  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:56.618999  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:56.619013  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:56.676096  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:56.676131  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:56.692624  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:56.692653  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:56.759911  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:56.759977  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:23:56.760006  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:23:56.797412  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:23:56.797441  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:23:56.824631  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:23:56.824659  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:23:56.855155  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:23:56.855187  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:23:56.884955  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:23:56.884990  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:56.922073  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:23:56.922152  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:23:59.458372  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:59.469068  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:59.469198  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:59.497614  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:23:59.497640  984872 cri.go:89] found id: ""
	I1210 07:23:59.497649  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:23:59.497708  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:59.501502  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:23:59.501583  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:59.527825  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:23:59.527848  984872 cri.go:89] found id: ""
	I1210 07:23:59.527857  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:23:59.527924  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:59.531664  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:23:59.531750  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:59.557366  984872 cri.go:89] found id: ""
	I1210 07:23:59.557390  984872 logs.go:282] 0 containers: []
	W1210 07:23:59.557398  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:23:59.557406  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:59.557466  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:59.592366  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:23:59.592388  984872 cri.go:89] found id: ""
	I1210 07:23:59.592397  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:23:59.592453  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:59.596205  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:59.596281  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:59.620791  984872 cri.go:89] found id: ""
	I1210 07:23:59.620816  984872 logs.go:282] 0 containers: []
	W1210 07:23:59.620825  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:59.620835  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:59.620896  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:59.647637  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:23:59.647711  984872 cri.go:89] found id: ""
	I1210 07:23:59.647734  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:23:59.647822  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:23:59.651768  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:59.651873  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:59.677546  984872 cri.go:89] found id: ""
	I1210 07:23:59.677574  984872 logs.go:282] 0 containers: []
	W1210 07:23:59.677583  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:59.677589  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:59.677654  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:59.703369  984872 cri.go:89] found id: ""
	I1210 07:23:59.703395  984872 logs.go:282] 0 containers: []
	W1210 07:23:59.703404  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:59.703420  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:59.703434  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:59.720432  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:23:59.720468  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:23:59.764822  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:23:59.764853  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:23:59.807365  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:23:59.807400  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:23:59.838091  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:23:59.838123  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:59.867424  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:59.867452  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:59.927982  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:59.928019  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:59.997934  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:59.998014  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:23:59.998036  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:00.073760  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:24:00.073793  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:02.672960  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:02.683098  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:02.683168  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:02.707728  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:02.707751  984872 cri.go:89] found id: ""
	I1210 07:24:02.707760  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:24:02.707821  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:02.711578  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:24:02.711654  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:02.739741  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:02.739768  984872 cri.go:89] found id: ""
	I1210 07:24:02.739777  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:24:02.739835  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:02.743644  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:24:02.743731  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:02.769215  984872 cri.go:89] found id: ""
	I1210 07:24:02.769239  984872 logs.go:282] 0 containers: []
	W1210 07:24:02.769248  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:24:02.769254  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:02.769319  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:02.797446  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:02.797468  984872 cri.go:89] found id: ""
	I1210 07:24:02.797476  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:24:02.797535  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:02.801641  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:02.801713  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:02.827164  984872 cri.go:89] found id: ""
	I1210 07:24:02.827187  984872 logs.go:282] 0 containers: []
	W1210 07:24:02.827195  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:02.827203  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:02.827268  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:02.853160  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:02.853184  984872 cri.go:89] found id: ""
	I1210 07:24:02.853193  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:24:02.853251  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:02.857010  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:02.857092  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:02.883086  984872 cri.go:89] found id: ""
	I1210 07:24:02.883110  984872 logs.go:282] 0 containers: []
	W1210 07:24:02.883119  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:02.883126  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:02.883192  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:02.908764  984872 cri.go:89] found id: ""
	I1210 07:24:02.908794  984872 logs.go:282] 0 containers: []
	W1210 07:24:02.908803  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:02.908819  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:24:02.908831  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:02.941418  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:02.941450  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:03.000668  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:03.000709  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:03.026405  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:24:03.026436  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:03.062976  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:24:03.063015  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:03.091959  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:03.091991  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:03.155982  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:03.156062  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:24:03.156082  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:03.203476  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:24:03.203507  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:03.238784  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:24:03.238816  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:24:05.769673  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:05.780027  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:05.780106  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:05.812110  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:05.812134  984872 cri.go:89] found id: ""
	I1210 07:24:05.812144  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:24:05.812201  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:05.816015  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:24:05.816094  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:05.842118  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:05.842139  984872 cri.go:89] found id: ""
	I1210 07:24:05.842148  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:24:05.842204  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:05.845800  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:24:05.845877  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:05.871993  984872 cri.go:89] found id: ""
	I1210 07:24:05.872021  984872 logs.go:282] 0 containers: []
	W1210 07:24:05.872030  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:24:05.872037  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:05.872105  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:05.897323  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:05.897345  984872 cri.go:89] found id: ""
	I1210 07:24:05.897353  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:24:05.897409  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:05.901244  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:05.901327  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:05.928582  984872 cri.go:89] found id: ""
	I1210 07:24:05.928605  984872 logs.go:282] 0 containers: []
	W1210 07:24:05.928613  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:05.928619  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:05.928684  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:05.954151  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:05.954170  984872 cri.go:89] found id: ""
	I1210 07:24:05.954178  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:24:05.954232  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:05.958135  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:05.958210  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:05.982517  984872 cri.go:89] found id: ""
	I1210 07:24:05.982542  984872 logs.go:282] 0 containers: []
	W1210 07:24:05.982551  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:05.982558  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:05.982621  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:06.011231  984872 cri.go:89] found id: ""
	I1210 07:24:06.011255  984872 logs.go:282] 0 containers: []
	W1210 07:24:06.011264  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:06.011277  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:24:06.011290  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:06.046342  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:24:06.046374  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:06.077542  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:24:06.077575  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:06.105724  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:24:06.105757  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:06.141257  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:24:06.141310  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:24:06.171840  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:24:06.171874  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:06.203432  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:06.203460  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:06.266503  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:06.266541  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:06.283827  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:06.283858  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:06.373457  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
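Every retry in this stretch fails the same way: the bundled kubectl reaches the node, but nothing is listening on localhost:8443, so the kube-apiserver container exists in containerd yet is not serving. A minimal manual probe of the same endpoint, assuming shell access to the node (the profile name below is a placeholder, not taken from this run):

    minikube -p <profile> ssh -- curl -ksS https://localhost:8443/livez
    # "connection refused" reproduces the kubectl error above; any HTTP
    # response, even a 401, would mean the apiserver is at least serving TLS

/livez is the standard kube-apiserver health endpoint; the harness itself never hits it directly and instead shells out to kubectl, as the log shows.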
	I1210 07:24:08.874375  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:08.885427  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:08.885503  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:08.912124  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:08.912151  984872 cri.go:89] found id: ""
	I1210 07:24:08.912160  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:24:08.912218  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:08.916313  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:24:08.916394  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:08.942070  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:08.942096  984872 cri.go:89] found id: ""
	I1210 07:24:08.942105  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:24:08.942162  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:08.945983  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:24:08.946065  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:08.972208  984872 cri.go:89] found id: ""
	I1210 07:24:08.972288  984872 logs.go:282] 0 containers: []
	W1210 07:24:08.972309  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:24:08.972317  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:08.972399  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:08.998247  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:08.998271  984872 cri.go:89] found id: ""
	I1210 07:24:08.998280  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:24:08.998337  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:09.005673  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:09.005770  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:09.036482  984872 cri.go:89] found id: ""
	I1210 07:24:09.036511  984872 logs.go:282] 0 containers: []
	W1210 07:24:09.036522  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:09.036533  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:09.036604  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:09.066903  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:09.066976  984872 cri.go:89] found id: ""
	I1210 07:24:09.066999  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:24:09.067087  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:09.070813  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:09.070890  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:09.095419  984872 cri.go:89] found id: ""
	I1210 07:24:09.095441  984872 logs.go:282] 0 containers: []
	W1210 07:24:09.095450  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:09.095456  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:09.095523  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:09.125348  984872 cri.go:89] found id: ""
	I1210 07:24:09.125371  984872 logs.go:282] 0 containers: []
	W1210 07:24:09.125380  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:09.125394  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:09.125405  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:09.184007  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:24:09.184046  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:09.212782  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:24:09.212818  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:24:09.243301  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:24:09.243335  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:09.271596  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:09.271628  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:09.288777  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:09.288805  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:09.378162  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:09.378187  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:24:09.378200  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:09.419974  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:24:09.420009  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:09.456471  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:24:09.456502  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
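Each cycle discovers containers by name before gathering anything: cri.go asks crictl for IDs only, and an empty result becomes the "0 containers" and No container was found lines above. The same queries by hand, using commands taken verbatim from the log:

    sudo crictl ps -a --quiet --name=kube-apiserver   # one container ID per line, or nothing
    sudo crictl ps -a --quiet --name=coredns          # empty here: coredns never started

The --name filter is a pattern match against the container name (a regular expression, per crictl's help text), which is why a short name like etcd matches the single control-plane instance.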
	I1210 07:24:11.988796  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:11.999095  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:11.999172  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:12.029964  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:12.029988  984872 cri.go:89] found id: ""
	I1210 07:24:12.030003  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:24:12.030061  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:12.033912  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:24:12.033985  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:12.059183  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:12.059210  984872 cri.go:89] found id: ""
	I1210 07:24:12.059220  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:24:12.059277  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:12.063108  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:24:12.063180  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:12.089223  984872 cri.go:89] found id: ""
	I1210 07:24:12.089244  984872 logs.go:282] 0 containers: []
	W1210 07:24:12.089254  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:24:12.089261  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:12.089323  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:12.114157  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:12.114176  984872 cri.go:89] found id: ""
	I1210 07:24:12.114185  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:24:12.114241  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:12.117916  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:12.117995  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:12.142720  984872 cri.go:89] found id: ""
	I1210 07:24:12.142742  984872 logs.go:282] 0 containers: []
	W1210 07:24:12.142751  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:12.142763  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:12.142825  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:12.168904  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:12.168926  984872 cri.go:89] found id: ""
	I1210 07:24:12.168934  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:24:12.168993  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:12.172776  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:12.172855  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:12.197091  984872 cri.go:89] found id: ""
	I1210 07:24:12.197116  984872 logs.go:282] 0 containers: []
	W1210 07:24:12.197125  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:12.197131  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:12.197208  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:12.221300  984872 cri.go:89] found id: ""
	I1210 07:24:12.221325  984872 logs.go:282] 0 containers: []
	W1210 07:24:12.221335  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:12.221391  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:12.221408  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:12.287080  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:12.287101  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:24:12.287114  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:12.325836  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:24:12.325891  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:24:12.368744  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:24:12.368783  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:12.402407  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:12.402436  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:12.461545  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:24:12.461581  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:12.496863  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:24:12.496893  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:12.530528  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:24:12.530565  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:12.566772  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:12.566807  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
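The kubelet and containerd sections come straight from the systemd journal rather than from any Kubernetes API, which is why they keep succeeding while the apiserver is down. Equivalent interactive commands (the harness can omit --no-pager because journalctl only invokes a pager on a TTY):

    sudo journalctl -u kubelet -n 400 --no-pager      # last 400 kubelet lines, same as the harness
    sudo journalctl -u containerd -n 400 --no-pager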
	I1210 07:24:15.084515  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:15.095840  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:15.095924  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:15.124148  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:15.124168  984872 cri.go:89] found id: ""
	I1210 07:24:15.124175  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:24:15.124234  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:15.128059  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:24:15.128142  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:15.153440  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:15.153461  984872 cri.go:89] found id: ""
	I1210 07:24:15.153469  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:24:15.153525  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:15.157552  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:24:15.157635  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:15.192065  984872 cri.go:89] found id: ""
	I1210 07:24:15.192088  984872 logs.go:282] 0 containers: []
	W1210 07:24:15.192097  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:24:15.192104  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:15.192164  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:15.216504  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:15.216525  984872 cri.go:89] found id: ""
	I1210 07:24:15.216533  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:24:15.216589  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:15.220480  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:15.220581  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:15.245683  984872 cri.go:89] found id: ""
	I1210 07:24:15.245709  984872 logs.go:282] 0 containers: []
	W1210 07:24:15.245719  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:15.245725  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:15.245786  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:15.270337  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:15.270360  984872 cri.go:89] found id: ""
	I1210 07:24:15.270369  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:24:15.270424  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:15.274053  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:15.274133  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:15.305361  984872 cri.go:89] found id: ""
	I1210 07:24:15.305388  984872 logs.go:282] 0 containers: []
	W1210 07:24:15.305401  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:15.305417  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:15.305498  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:15.343759  984872 cri.go:89] found id: ""
	I1210 07:24:15.343793  984872 logs.go:282] 0 containers: []
	W1210 07:24:15.343803  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:15.343817  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:15.343828  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:15.409064  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:15.409101  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:15.479057  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:15.479076  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:24:15.479089  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:15.526787  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:24:15.526816  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:15.557440  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:24:15.557472  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:15.586416  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:15.586446  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:15.602376  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:24:15.602410  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:15.628082  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:24:15.628111  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:15.660541  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:24:15.660574  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
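Only four containers ever show up in these cycles (kube-apiserver, etcd, kube-scheduler, kube-controller-manager) because those are static pods that the kubelet starts from on-disk manifests. coredns, kube-proxy, kindnet, and storage-provisioner are created through the API, so they cannot appear until the apiserver answers. Assuming the kubeadm default manifest path, which minikube uses:

    ls /etc/kubernetes/manifests
    # etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml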
	I1210 07:24:18.189157  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:18.200177  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:18.200251  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:18.230005  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:18.230024  984872 cri.go:89] found id: ""
	I1210 07:24:18.230033  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:24:18.230087  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:18.233620  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:24:18.233737  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:18.259158  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:18.259181  984872 cri.go:89] found id: ""
	I1210 07:24:18.259190  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:24:18.259245  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:18.262799  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:24:18.262874  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:18.287734  984872 cri.go:89] found id: ""
	I1210 07:24:18.287757  984872 logs.go:282] 0 containers: []
	W1210 07:24:18.287766  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:24:18.287773  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:18.287834  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:18.325193  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:18.325217  984872 cri.go:89] found id: ""
	I1210 07:24:18.325225  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:24:18.325281  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:18.329958  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:18.330033  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:18.359988  984872 cri.go:89] found id: ""
	I1210 07:24:18.360012  984872 logs.go:282] 0 containers: []
	W1210 07:24:18.360021  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:18.360027  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:18.360089  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:18.384429  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:18.384451  984872 cri.go:89] found id: ""
	I1210 07:24:18.384459  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:24:18.384515  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:18.388155  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:18.388228  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:18.416659  984872 cri.go:89] found id: ""
	I1210 07:24:18.416737  984872 logs.go:282] 0 containers: []
	W1210 07:24:18.416761  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:18.416783  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:18.416863  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:18.442762  984872 cri.go:89] found id: ""
	I1210 07:24:18.442805  984872 logs.go:282] 0 containers: []
	W1210 07:24:18.442816  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:18.442832  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:24:18.442846  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:18.475980  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:24:18.476010  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:18.507621  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:24:18.507651  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:18.540377  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:24:18.540407  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:24:18.570921  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:24:18.570956  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:18.611699  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:18.611727  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:18.628548  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:18.628579  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:18.694135  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:18.694155  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:24:18.694169  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:18.721380  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:18.721410  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
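Each cycle opens with a pgrep probe that locates the newest apiserver process by its full command line. Flag meanings, for the command exactly as logged:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # -f matches against the full command line, -x requires the whole
    # line to match the pattern, -n reports only the newest matching PID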
	I1210 07:24:21.283364  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:21.298637  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:21.298711  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:21.365463  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:21.365485  984872 cri.go:89] found id: ""
	I1210 07:24:21.365494  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:24:21.365549  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:21.376682  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:24:21.376764  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:21.420349  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:21.420375  984872 cri.go:89] found id: ""
	I1210 07:24:21.420385  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:24:21.420444  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:21.425074  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:24:21.425153  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:21.467840  984872 cri.go:89] found id: ""
	I1210 07:24:21.467867  984872 logs.go:282] 0 containers: []
	W1210 07:24:21.467877  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:24:21.467883  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:21.467958  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:21.496488  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:21.496513  984872 cri.go:89] found id: ""
	I1210 07:24:21.496523  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:24:21.496579  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:21.501539  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:21.501623  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:21.551652  984872 cri.go:89] found id: ""
	I1210 07:24:21.551683  984872 logs.go:282] 0 containers: []
	W1210 07:24:21.551693  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:21.551699  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:21.551759  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:21.584356  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:21.584385  984872 cri.go:89] found id: ""
	I1210 07:24:21.584393  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:24:21.584451  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:21.589489  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:21.589572  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:21.628481  984872 cri.go:89] found id: ""
	I1210 07:24:21.628507  984872 logs.go:282] 0 containers: []
	W1210 07:24:21.628517  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:21.628523  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:21.628591  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:21.660106  984872 cri.go:89] found id: ""
	I1210 07:24:21.660133  984872 logs.go:282] 0 containers: []
	W1210 07:24:21.660143  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:21.660161  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:21.660174  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:21.740136  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:21.740160  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:24:21.740175  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:21.774842  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:24:21.774876  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:21.809966  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:24:21.810001  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:21.837073  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:21.837104  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:21.895593  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:21.895632  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:21.912616  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:24:21.912644  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:21.946540  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:24:21.946575  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:24:21.976390  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:24:21.976427  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
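The describe-nodes step does not use the host's kubectl: minikube ships a version-matched binary inside the node and points it at the node-local kubeconfig, so the repeated failure isolates the apiserver itself rather than any client-side setup. Reproducing it verbatim from the log:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # exits 1 with "connection refused" until something binds localhost:8443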
	I1210 07:24:24.507025  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:24.518232  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:24.518308  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:24.548757  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:24.548781  984872 cri.go:89] found id: ""
	I1210 07:24:24.548791  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:24:24.548845  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:24.552891  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:24:24.552985  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:24.581048  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:24.581070  984872 cri.go:89] found id: ""
	I1210 07:24:24.581079  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:24:24.581158  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:24.586302  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:24:24.586405  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:24.624820  984872 cri.go:89] found id: ""
	I1210 07:24:24.624844  984872 logs.go:282] 0 containers: []
	W1210 07:24:24.624852  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:24:24.624883  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:24.624965  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:24.657119  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:24.657142  984872 cri.go:89] found id: ""
	I1210 07:24:24.657152  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:24:24.657227  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:24.662359  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:24.662456  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:24.704957  984872 cri.go:89] found id: ""
	I1210 07:24:24.704983  984872 logs.go:282] 0 containers: []
	W1210 07:24:24.704992  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:24.704999  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:24.705108  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:24.748618  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:24.748642  984872 cri.go:89] found id: ""
	I1210 07:24:24.748652  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:24:24.748738  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:24.752850  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:24.752973  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:24.793842  984872 cri.go:89] found id: ""
	I1210 07:24:24.793916  984872 logs.go:282] 0 containers: []
	W1210 07:24:24.793940  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:24.793960  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:24.794057  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:24.835218  984872 cri.go:89] found id: ""
	I1210 07:24:24.835281  984872 logs.go:282] 0 containers: []
	W1210 07:24:24.835313  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:24.835347  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:24:24.835390  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:24.889680  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:24:24.889714  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:24.951243  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:24.951317  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:25.020290  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:24:25.020328  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:25.073235  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:24:25.073313  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:25.156176  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:24:25.156260  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:24:25.197340  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:24:25.197375  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:25.249711  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:25.249741  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:25.278838  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:25.278868  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:25.348367  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
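With the API unreachable, container logs are still pulled straight from the runtime, which is what the repeated crictl logs --tail 400 invocations do. A hand-run equivalent that chains the lookup and the fetch (the ID variable is illustrative, not from the harness):

    ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n 1)
    sudo crictl logs --tail 400 "$ID"   # last 400 lines, same as the harness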
	I1210 07:24:27.848897  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:27.862616  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:27.862684  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:27.893726  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:27.893750  984872 cri.go:89] found id: ""
	I1210 07:24:27.893758  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:24:27.893822  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:27.900657  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:24:27.900734  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:27.940640  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:27.940669  984872 cri.go:89] found id: ""
	I1210 07:24:27.940679  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:24:27.940736  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:27.945020  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:24:27.945112  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:27.989330  984872 cri.go:89] found id: ""
	I1210 07:24:27.989352  984872 logs.go:282] 0 containers: []
	W1210 07:24:27.989361  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:24:27.989367  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:27.989420  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:28.034070  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:28.034094  984872 cri.go:89] found id: ""
	I1210 07:24:28.034104  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:24:28.034168  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:28.039105  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:28.039190  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:28.116909  984872 cri.go:89] found id: ""
	I1210 07:24:28.116935  984872 logs.go:282] 0 containers: []
	W1210 07:24:28.116944  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:28.116950  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:28.117065  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:28.160562  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:28.160587  984872 cri.go:89] found id: ""
	I1210 07:24:28.160596  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:24:28.160660  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:28.165067  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:28.165175  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:28.195971  984872 cri.go:89] found id: ""
	I1210 07:24:28.195998  984872 logs.go:282] 0 containers: []
	W1210 07:24:28.196007  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:28.196013  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:28.196077  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:28.240838  984872 cri.go:89] found id: ""
	I1210 07:24:28.240863  984872 logs.go:282] 0 containers: []
	W1210 07:24:28.240873  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:28.240905  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:24:28.240920  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:28.274813  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:24:28.274847  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:24:28.318259  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:24:28.318297  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:28.373557  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:28.373588  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:28.440224  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:28.440263  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:28.458588  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:28.458614  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:28.548627  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:28.548645  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:24:28.548659  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:28.598065  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:24:28.598100  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:28.662742  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:24:28.662820  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
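The dmesg pass filters the kernel ring buffer down to warnings and worse before truncating. Flag meanings as util-linux documents them (treat this reading as an assumption about the exact dmesg build in the node image):

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # -H human-readable timestamps, -P no pager, -L=never no color codes,
    # --level keeps only the listed priorities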
	I1210 07:24:31.216295  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:31.226650  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:31.226743  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:31.251671  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:31.251742  984872 cri.go:89] found id: ""
	I1210 07:24:31.251756  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:24:31.251821  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:31.255692  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:24:31.255770  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:31.280599  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:31.280622  984872 cri.go:89] found id: ""
	I1210 07:24:31.280630  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:24:31.280718  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:31.284549  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:24:31.284628  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:31.308777  984872 cri.go:89] found id: ""
	I1210 07:24:31.308802  984872 logs.go:282] 0 containers: []
	W1210 07:24:31.308810  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:24:31.308817  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:31.308878  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:31.336572  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:31.336594  984872 cri.go:89] found id: ""
	I1210 07:24:31.336603  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:24:31.336662  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:31.340489  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:31.340568  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:31.365948  984872 cri.go:89] found id: ""
	I1210 07:24:31.365975  984872 logs.go:282] 0 containers: []
	W1210 07:24:31.365984  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:31.365990  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:31.366052  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:31.395986  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:31.396058  984872 cri.go:89] found id: ""
	I1210 07:24:31.396073  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:24:31.396132  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:31.400018  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:31.400093  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:31.425674  984872 cri.go:89] found id: ""
	I1210 07:24:31.425751  984872 logs.go:282] 0 containers: []
	W1210 07:24:31.425773  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:31.425793  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:31.425872  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:31.457310  984872 cri.go:89] found id: ""
	I1210 07:24:31.457376  984872 logs.go:282] 0 containers: []
	W1210 07:24:31.457402  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:31.457430  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:24:31.457457  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:31.523588  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:24:31.523663  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:31.562932  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:24:31.563017  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:31.626137  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:31.626209  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:31.687907  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:31.687971  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:31.710680  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:31.710751  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:31.816465  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:24:31.816527  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:24:31.816559  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:31.890156  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:24:31.890230  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:31.932282  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:24:31.932308  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
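
The pass above is one full diagnostic cycle: after probing for a running apiserver, minikube enumerates the expected control-plane components one at a time with "crictl ps -a --quiet --name=<component>", treating empty output as "no container found". A minimal sketch of that enumeration step, assuming only that crictl is reachable on the node (helper names here are illustrative, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs returns the container IDs (one per output line) whose
    // names match the given component, across all states (-a).
    func listContainerIDs(component string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--name="+component).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"} {
    		ids, err := listContainerIDs(c)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container found matching %q\n", c)
    			continue
    		}
    		fmt.Printf("%s: %v\n", c, ids)
    	}
    }

In this run only kube-apiserver, etcd, kube-scheduler, and kube-controller-manager ever return an ID; coredns, kube-proxy, kindnet, and storage-provisioner stay empty throughout, which is consistent with an apiserver that never becomes reachable.
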
	I1210 07:24:34.467023  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:34.478003  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:34.478076  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:34.504293  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:34.504316  984872 cri.go:89] found id: ""
	I1210 07:24:34.504325  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:24:34.504380  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:34.508174  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:24:34.508258  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:34.535043  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:34.535065  984872 cri.go:89] found id: ""
	I1210 07:24:34.535073  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:24:34.535131  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:34.539026  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:24:34.539106  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:34.569899  984872 cri.go:89] found id: ""
	I1210 07:24:34.569921  984872 logs.go:282] 0 containers: []
	W1210 07:24:34.569930  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:24:34.569936  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:34.570002  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:34.595451  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:34.595486  984872 cri.go:89] found id: ""
	I1210 07:24:34.595501  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:24:34.595579  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:34.599482  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:34.599556  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:34.629704  984872 cri.go:89] found id: ""
	I1210 07:24:34.629728  984872 logs.go:282] 0 containers: []
	W1210 07:24:34.629737  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:34.629743  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:34.629803  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:34.656430  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:34.656450  984872 cri.go:89] found id: ""
	I1210 07:24:34.656458  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:24:34.656513  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:34.660253  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:34.660325  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:34.684783  984872 cri.go:89] found id: ""
	I1210 07:24:34.684848  984872 logs.go:282] 0 containers: []
	W1210 07:24:34.684872  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:34.684891  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:34.684967  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:34.710325  984872 cri.go:89] found id: ""
	I1210 07:24:34.710390  984872 logs.go:282] 0 containers: []
	W1210 07:24:34.710414  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:34.710442  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:24:34.710492  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:34.737774  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:34.737859  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:34.754640  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:24:34.754715  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:34.785983  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:24:34.786014  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:34.819995  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:24:34.820023  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:24:34.853609  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:34.853698  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:34.918055  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:34.918095  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:34.986576  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:24:34.986594  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:24:34.986609  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:35.024650  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:24:35.024687  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
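
Each "Gathering logs for ..." line then shells out to crictl with a fixed 400-line tail per discovered container ID. A hedged sketch of that step (the helper name is illustrative, and the hard-coded ID is just the kube-apiserver ID from the log above):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // tailContainerLogs prints the last `lines` lines of a container's log via
    // crictl. CombinedOutput captures both stdout and stderr of the command.
    func tailContainerLogs(id string, lines int) (string, error) {
    	cmd := exec.Command("/bin/bash", "-c",
    		fmt.Sprintf("sudo /usr/local/bin/crictl logs --tail %d %s", lines, id))
    	out, err := cmd.CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := tailContainerLogs(
    		"a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4", 400)
    	if err != nil {
    		fmt.Println("gathering failed:", err)
    	}
    	fmt.Print(out)
    }
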
	I1210 07:24:37.564619  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:37.575172  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:37.575248  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:37.600308  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:37.600335  984872 cri.go:89] found id: ""
	I1210 07:24:37.600344  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:24:37.600403  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:37.604308  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:24:37.604383  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:37.633282  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:37.633306  984872 cri.go:89] found id: ""
	I1210 07:24:37.633315  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:24:37.633372  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:37.637185  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:24:37.637259  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:37.663214  984872 cri.go:89] found id: ""
	I1210 07:24:37.663243  984872 logs.go:282] 0 containers: []
	W1210 07:24:37.663252  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:24:37.663259  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:37.663342  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:37.689128  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:37.689150  984872 cri.go:89] found id: ""
	I1210 07:24:37.689159  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:24:37.689217  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:37.693036  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:37.693113  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:37.718276  984872 cri.go:89] found id: ""
	I1210 07:24:37.718302  984872 logs.go:282] 0 containers: []
	W1210 07:24:37.718311  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:37.718317  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:37.718379  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:37.743524  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:37.743546  984872 cri.go:89] found id: ""
	I1210 07:24:37.743555  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:24:37.743611  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:37.747207  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:37.747276  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:37.771202  984872 cri.go:89] found id: ""
	I1210 07:24:37.771225  984872 logs.go:282] 0 containers: []
	W1210 07:24:37.771235  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:37.771241  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:37.771303  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:37.810967  984872 cri.go:89] found id: ""
	I1210 07:24:37.810991  984872 logs.go:282] 0 containers: []
	W1210 07:24:37.810999  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:37.811013  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:37.811024  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:37.874942  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:37.874981  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:37.891599  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:24:37.891629  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:37.928839  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:24:37.928873  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:37.955123  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:24:37.955154  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:37.987658  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:24:37.987692  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:24:38.019103  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:38.019147  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:38.088768  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:24:38.088792  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:24:38.088807  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:38.127310  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:24:38.127345  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:40.670649  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:40.681333  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:40.681454  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:40.706119  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:40.706145  984872 cri.go:89] found id: ""
	I1210 07:24:40.706153  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:24:40.706217  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:40.710164  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:24:40.710259  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:40.735429  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:40.735494  984872 cri.go:89] found id: ""
	I1210 07:24:40.735509  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:24:40.735576  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:40.739192  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:24:40.739264  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:40.768330  984872 cri.go:89] found id: ""
	I1210 07:24:40.768360  984872 logs.go:282] 0 containers: []
	W1210 07:24:40.768369  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:24:40.768376  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:40.768436  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:40.794732  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:40.794758  984872 cri.go:89] found id: ""
	I1210 07:24:40.794768  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:24:40.794825  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:40.799140  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:40.799215  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:40.829347  984872 cri.go:89] found id: ""
	I1210 07:24:40.829422  984872 logs.go:282] 0 containers: []
	W1210 07:24:40.829445  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:40.829465  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:40.829550  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:40.860190  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:40.860215  984872 cri.go:89] found id: ""
	I1210 07:24:40.860223  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:24:40.860279  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:40.864104  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:40.864183  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:40.889904  984872 cri.go:89] found id: ""
	I1210 07:24:40.889970  984872 logs.go:282] 0 containers: []
	W1210 07:24:40.889994  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:40.890019  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:40.890097  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:40.920649  984872 cri.go:89] found id: ""
	I1210 07:24:40.920677  984872 logs.go:282] 0 containers: []
	W1210 07:24:40.920687  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:40.920726  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:40.920746  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:40.979013  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:40.979053  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:40.995915  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:24:40.995950  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:41.029042  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:24:41.029073  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:41.060008  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:24:41.060039  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:41.088375  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:41.088406  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:41.157399  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:24:41.157422  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:24:41.157437  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:41.191130  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:24:41.191164  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:41.222886  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:24:41.222918  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
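
The timestamps show the whole pass re-running roughly every three seconds: probe for the apiserver process with pgrep, and while the cluster is still unhealthy, collect the same diagnostics again. A sketch of such a poll-until-healthy loop, with an assumed interval and deadline (minikube's actual tuning may differ):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverRunning reports whether a kube-apiserver process matches the
    // same pattern the log probes with; pgrep exits non-zero on no match.
    func apiserverRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf",
    		"kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	deadline := time.Now().Add(5 * time.Minute) // assumed deadline
    	for time.Now().Before(deadline) {
    		if apiserverRunning() {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		fmt.Println("not ready; gathering diagnostics and retrying")
    		time.Sleep(3 * time.Second) // cadence seen in the timestamps above
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }

Note that in this run the pgrep probe itself appears to return promptly; the loop keeps cycling because the apiserver, although present as a container, never answers on its port (see the recurring describe-nodes failure below).
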
	I1210 07:24:43.753697  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:43.763957  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:43.764039  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:43.789236  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:43.789260  984872 cri.go:89] found id: ""
	I1210 07:24:43.789269  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:24:43.789326  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:43.793219  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:24:43.793302  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:43.826631  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:43.826657  984872 cri.go:89] found id: ""
	I1210 07:24:43.826665  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:24:43.826750  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:43.831691  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:24:43.831763  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:43.861034  984872 cri.go:89] found id: ""
	I1210 07:24:43.861061  984872 logs.go:282] 0 containers: []
	W1210 07:24:43.861071  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:24:43.861077  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:43.861142  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:43.892054  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:43.892084  984872 cri.go:89] found id: ""
	I1210 07:24:43.892094  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:24:43.892175  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:43.896060  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:43.896136  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:43.920696  984872 cri.go:89] found id: ""
	I1210 07:24:43.920723  984872 logs.go:282] 0 containers: []
	W1210 07:24:43.920732  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:43.920739  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:43.920799  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:43.946566  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:43.946589  984872 cri.go:89] found id: ""
	I1210 07:24:43.946598  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:24:43.946661  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:43.950631  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:43.950712  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:43.980646  984872 cri.go:89] found id: ""
	I1210 07:24:43.980678  984872 logs.go:282] 0 containers: []
	W1210 07:24:43.980688  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:43.980695  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:43.980759  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:44.012669  984872 cri.go:89] found id: ""
	I1210 07:24:44.012745  984872 logs.go:282] 0 containers: []
	W1210 07:24:44.012758  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:44.012772  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:44.012783  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:44.029957  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:24:44.029986  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:44.065304  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:24:44.065337  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:44.097201  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:24:44.097240  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:24:44.126698  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:24:44.126735  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:44.159047  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:44.159079  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:44.217885  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:44.217921  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:44.289018  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:24:44.289052  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:24:44.289065  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:44.316514  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:24:44.316545  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:46.850600  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:46.860939  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:46.861051  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:46.887694  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:46.887718  984872 cri.go:89] found id: ""
	I1210 07:24:46.887727  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:24:46.887783  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:46.891589  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:24:46.891665  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:46.915768  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:46.915788  984872 cri.go:89] found id: ""
	I1210 07:24:46.915796  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:24:46.915857  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:46.919702  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:24:46.919782  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:46.944754  984872 cri.go:89] found id: ""
	I1210 07:24:46.944780  984872 logs.go:282] 0 containers: []
	W1210 07:24:46.944788  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:24:46.944795  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:46.944854  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:46.969839  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:46.969860  984872 cri.go:89] found id: ""
	I1210 07:24:46.969868  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:24:46.969922  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:46.973679  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:46.973772  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:46.999552  984872 cri.go:89] found id: ""
	I1210 07:24:46.999584  984872 logs.go:282] 0 containers: []
	W1210 07:24:46.999594  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:46.999607  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:46.999674  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:47.027023  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:47.027043  984872 cri.go:89] found id: ""
	I1210 07:24:47.027058  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:24:47.027124  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:47.030749  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:47.030820  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:47.055575  984872 cri.go:89] found id: ""
	I1210 07:24:47.055880  984872 logs.go:282] 0 containers: []
	W1210 07:24:47.055925  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:47.055942  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:47.056007  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:47.080481  984872 cri.go:89] found id: ""
	I1210 07:24:47.080504  984872 logs.go:282] 0 containers: []
	W1210 07:24:47.080512  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:47.080527  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:24:47.080539  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:47.117713  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:24:47.117743  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:47.152955  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:24:47.152989  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:47.184191  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:47.184219  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:47.241998  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:47.242034  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:47.258719  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:47.258749  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:47.326327  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:24:47.326352  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:24:47.326367  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:47.360407  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:24:47.360442  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:47.388115  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:24:47.388143  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
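
The recurring "connection refused" on localhost:8443 means nothing is accepting connections on the apiserver port inside the node, even though a kube-apiserver container ID is found each pass: the container exists but is not serving. A reachability probe like this sketch distinguishes "refused" (no listener at all) from a timeout against a slow or wedged apiserver (port taken from the error text above):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Same host:port kubectl fails against in the log above.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// "connection refused" = no listener; a timeout would instead
    		// suggest a hung but bound apiserver.
    		fmt.Println("apiserver port unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }
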
	I1210 07:24:49.918019  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:49.928450  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:49.928565  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:49.956511  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:49.956533  984872 cri.go:89] found id: ""
	I1210 07:24:49.956542  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:24:49.956599  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:49.960553  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:24:49.960645  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:49.986548  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:49.986572  984872 cri.go:89] found id: ""
	I1210 07:24:49.986581  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:24:49.986657  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:49.990637  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:24:49.990720  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:50.025300  984872 cri.go:89] found id: ""
	I1210 07:24:50.025335  984872 logs.go:282] 0 containers: []
	W1210 07:24:50.025345  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:24:50.025352  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:50.025414  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:50.053049  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:50.053122  984872 cri.go:89] found id: ""
	I1210 07:24:50.053151  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:24:50.053240  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:50.057296  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:50.057373  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:50.089384  984872 cri.go:89] found id: ""
	I1210 07:24:50.089411  984872 logs.go:282] 0 containers: []
	W1210 07:24:50.089422  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:50.089429  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:50.089498  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:50.116874  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:50.116945  984872 cri.go:89] found id: ""
	I1210 07:24:50.116967  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:24:50.117064  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:50.120845  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:50.120918  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:50.156636  984872 cri.go:89] found id: ""
	I1210 07:24:50.156662  984872 logs.go:282] 0 containers: []
	W1210 07:24:50.156671  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:50.156679  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:50.156742  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:50.183300  984872 cri.go:89] found id: ""
	I1210 07:24:50.183330  984872 logs.go:282] 0 containers: []
	W1210 07:24:50.183339  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:50.183353  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:24:50.183364  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:50.220641  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:50.220674  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:50.286164  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:24:50.286186  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:24:50.286201  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:50.319992  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:24:50.320029  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:50.362218  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:50.362251  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:50.427694  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:50.427737  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:50.445480  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:24:50.445512  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:50.491624  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:24:50.491660  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:50.530204  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:24:50.530248  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:24:53.069447  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:53.080045  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:53.080120  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:53.106452  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:53.106510  984872 cri.go:89] found id: ""
	I1210 07:24:53.106544  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:24:53.106625  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:53.110460  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:24:53.110570  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:53.135327  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:53.135402  984872 cri.go:89] found id: ""
	I1210 07:24:53.135426  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:24:53.135513  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:53.142568  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:24:53.142687  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:53.169003  984872 cri.go:89] found id: ""
	I1210 07:24:53.169081  984872 logs.go:282] 0 containers: []
	W1210 07:24:53.169106  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:24:53.169127  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:53.169240  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:53.197154  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:53.197176  984872 cri.go:89] found id: ""
	I1210 07:24:53.197185  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:24:53.197243  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:53.201063  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:53.201134  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:53.226043  984872 cri.go:89] found id: ""
	I1210 07:24:53.226064  984872 logs.go:282] 0 containers: []
	W1210 07:24:53.226073  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:53.226079  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:53.226140  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:53.252601  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:53.252623  984872 cri.go:89] found id: ""
	I1210 07:24:53.252642  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:24:53.252701  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:53.256391  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:53.256460  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:53.280925  984872 cri.go:89] found id: ""
	I1210 07:24:53.280953  984872 logs.go:282] 0 containers: []
	W1210 07:24:53.280963  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:53.280969  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:53.281031  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:53.306117  984872 cri.go:89] found id: ""
	I1210 07:24:53.306140  984872 logs.go:282] 0 containers: []
	W1210 07:24:53.306150  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:53.306172  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:24:53.306186  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:24:53.336879  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:24:53.336924  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:53.366127  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:53.366158  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:53.435999  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:53.436046  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:53.454422  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:53.454452  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:53.522914  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:24:53.522937  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:24:53.522951  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:53.567976  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:24:53.568008  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:53.607186  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:24:53.607215  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:53.644302  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:24:53.644337  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
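
Unlike the component containers, the kubelet and containerd entries are pulled with "journalctl -u" because those run as systemd units on the node rather than as CRI containers. A minimal sketch of that collection step (helper name is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // unitLogs returns the last `lines` journal entries for a systemd unit,
    // matching the journalctl invocations in the log above.
    func unitLogs(unit string, lines int) (string, error) {
    	out, err := exec.Command("sudo", "journalctl", "-u", unit,
    		"-n", fmt.Sprint(lines)).Output()
    	return string(out), err
    }

    func main() {
    	for _, u := range []string{"kubelet", "containerd"} {
    		logs, err := unitLogs(u, 400)
    		if err != nil {
    			fmt.Printf("failed to read %s logs: %v\n", u, err)
    			continue
    		}
    		fmt.Printf("== %s ==\n%s\n", u, logs)
    	}
    }
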
	I1210 07:24:56.180857  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:56.193586  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:56.193660  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:56.223760  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:56.223784  984872 cri.go:89] found id: ""
	I1210 07:24:56.223792  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:24:56.223862  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:56.227608  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:24:56.227684  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:56.254657  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:56.254680  984872 cri.go:89] found id: ""
	I1210 07:24:56.254688  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:24:56.254743  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:56.258423  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:24:56.258518  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:56.284350  984872 cri.go:89] found id: ""
	I1210 07:24:56.284373  984872 logs.go:282] 0 containers: []
	W1210 07:24:56.284388  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:24:56.284394  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:56.284452  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:56.312144  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:56.312166  984872 cri.go:89] found id: ""
	I1210 07:24:56.312175  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:24:56.312249  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:56.315986  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:56.316058  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:56.340609  984872 cri.go:89] found id: ""
	I1210 07:24:56.340633  984872 logs.go:282] 0 containers: []
	W1210 07:24:56.340642  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:56.340648  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:56.340714  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:56.365949  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:56.365972  984872 cri.go:89] found id: ""
	I1210 07:24:56.365982  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:24:56.366040  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:56.370036  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:56.370114  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:56.395173  984872 cri.go:89] found id: ""
	I1210 07:24:56.395199  984872 logs.go:282] 0 containers: []
	W1210 07:24:56.395208  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:56.395214  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:56.395276  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:56.424635  984872 cri.go:89] found id: ""
	I1210 07:24:56.424661  984872 logs.go:282] 0 containers: []
	W1210 07:24:56.424671  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:56.424686  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:56.424727  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:56.441900  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:56.441931  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:56.512670  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:56.512690  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:24:56.512705  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:56.546069  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:24:56.546155  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:56.578722  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:24:56.578793  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:56.617214  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:24:56.617249  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:56.647777  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:56.647804  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:56.705781  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:24:56.705821  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:56.739288  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:24:56.739324  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
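The block above is one full iteration of minikube's apiserver wait loop: roughly every three seconds it polls for a running kube-apiserver process, enumerates each expected control-plane container through crictl, and re-gathers kubelet, containerd, dmesg, and per-container logs. Every command it runs is visible in the log; a minimal sketch of reproducing the same checks by hand on the node, assuming crictl and the v1.35.0-beta.0 kubectl binary sit at the paths the log shows:

    # Poll for a running apiserver process (the loop's success condition)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # Enumerate control-plane containers the way logs.go does, one name at a time
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      sudo crictl ps -a --quiet --name="$name"
    done

    # The check that keeps failing: describe nodes via the in-cluster kubeconfig
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig

In this run the last command exits 1 every time with "connection refused" on localhost:8443, so the loop never reaches its success condition and the cycle repeats below.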
	I1210 07:24:59.268957  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:59.278849  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:59.278929  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:59.304406  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:59.304428  984872 cri.go:89] found id: ""
	I1210 07:24:59.304436  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:24:59.304492  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:59.308418  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:24:59.308499  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:59.333383  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:59.333407  984872 cri.go:89] found id: ""
	I1210 07:24:59.333415  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:24:59.333479  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:59.337209  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:24:59.337286  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:59.362619  984872 cri.go:89] found id: ""
	I1210 07:24:59.362648  984872 logs.go:282] 0 containers: []
	W1210 07:24:59.362657  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:24:59.362664  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:59.362725  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:59.388276  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:24:59.388298  984872 cri.go:89] found id: ""
	I1210 07:24:59.388307  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:24:59.388361  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:59.392100  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:59.392178  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:59.417507  984872 cri.go:89] found id: ""
	I1210 07:24:59.417534  984872 logs.go:282] 0 containers: []
	W1210 07:24:59.417543  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:59.417549  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:59.417620  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:59.443458  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:59.443481  984872 cri.go:89] found id: ""
	I1210 07:24:59.443490  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:24:59.443556  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:24:59.447334  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:59.447412  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:59.471975  984872 cri.go:89] found id: ""
	I1210 07:24:59.471998  984872 logs.go:282] 0 containers: []
	W1210 07:24:59.472006  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:59.472013  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:59.472073  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:59.501522  984872 cri.go:89] found id: ""
	I1210 07:24:59.501544  984872 logs.go:282] 0 containers: []
	W1210 07:24:59.501552  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:59.501567  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:59.501578  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:59.518020  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:59.518089  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:59.599220  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:59.599284  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:24:59.599313  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:24:59.650103  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:24:59.650133  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:24:59.681747  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:24:59.681779  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:24:59.710581  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:24:59.710619  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:59.740380  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:59.740408  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:59.798062  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:24:59.798101  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:24:59.837744  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:24:59.837774  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:02.366761  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:02.377342  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:02.377418  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:02.405576  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:02.405597  984872 cri.go:89] found id: ""
	I1210 07:25:02.405606  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:25:02.405668  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:02.409318  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:25:02.409399  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:02.436672  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:02.436697  984872 cri.go:89] found id: ""
	I1210 07:25:02.436705  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:25:02.436765  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:02.440515  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:25:02.440591  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:02.464537  984872 cri.go:89] found id: ""
	I1210 07:25:02.464560  984872 logs.go:282] 0 containers: []
	W1210 07:25:02.464568  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:25:02.464574  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:02.464637  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:02.493859  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:02.493885  984872 cri.go:89] found id: ""
	I1210 07:25:02.493894  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:25:02.493985  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:02.497445  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:02.497521  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:02.523447  984872 cri.go:89] found id: ""
	I1210 07:25:02.523473  984872 logs.go:282] 0 containers: []
	W1210 07:25:02.523482  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:02.523488  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:02.523556  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:02.554060  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:02.554083  984872 cri.go:89] found id: ""
	I1210 07:25:02.554091  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:25:02.554149  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:02.558964  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:02.559073  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:02.590255  984872 cri.go:89] found id: ""
	I1210 07:25:02.590281  984872 logs.go:282] 0 containers: []
	W1210 07:25:02.590292  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:02.590327  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:02.590408  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:02.620851  984872 cri.go:89] found id: ""
	I1210 07:25:02.620879  984872 logs.go:282] 0 containers: []
	W1210 07:25:02.620888  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:02.620931  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:25:02.620950  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:02.665993  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:02.666021  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:02.724742  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:02.724780  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:02.741373  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:02.741407  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:02.805980  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:02.806049  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:25:02.806078  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:02.839942  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:25:02.839974  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:02.870033  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:25:02.870064  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:02.901641  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:25:02.901673  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:02.939094  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:25:02.939127  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:25:05.469009  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:05.479844  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:05.479930  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:05.513155  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:05.513174  984872 cri.go:89] found id: ""
	I1210 07:25:05.513183  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:25:05.513248  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:05.517901  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:25:05.517989  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:05.561170  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:05.561189  984872 cri.go:89] found id: ""
	I1210 07:25:05.561198  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:25:05.561255  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:05.565501  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:25:05.565602  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:05.597318  984872 cri.go:89] found id: ""
	I1210 07:25:05.597341  984872 logs.go:282] 0 containers: []
	W1210 07:25:05.597350  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:25:05.597356  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:05.597416  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:05.625251  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:05.625325  984872 cri.go:89] found id: ""
	I1210 07:25:05.625348  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:25:05.625431  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:05.630775  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:05.630902  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:05.656800  984872 cri.go:89] found id: ""
	I1210 07:25:05.656827  984872 logs.go:282] 0 containers: []
	W1210 07:25:05.656837  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:05.656842  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:05.656903  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:05.683263  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:05.683286  984872 cri.go:89] found id: ""
	I1210 07:25:05.683294  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:25:05.683350  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:05.687158  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:05.687233  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:05.714769  984872 cri.go:89] found id: ""
	I1210 07:25:05.714795  984872 logs.go:282] 0 containers: []
	W1210 07:25:05.714805  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:05.714812  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:05.714877  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:05.742062  984872 cri.go:89] found id: ""
	I1210 07:25:05.742087  984872 logs.go:282] 0 containers: []
	W1210 07:25:05.742097  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:05.742111  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:05.742123  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:05.759300  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:05.759327  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:05.824054  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:05.824083  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:25:05.824097  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:05.854922  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:25:05.854955  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:05.882264  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:25:05.882297  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:05.925173  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:05.925201  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:05.986520  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:25:05.986558  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:06.021868  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:25:06.021909  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:06.055741  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:25:06.055777  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:25:08.585742  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:08.603516  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:08.603591  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:08.660028  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:08.660048  984872 cri.go:89] found id: ""
	I1210 07:25:08.660056  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:25:08.660113  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:08.673060  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:25:08.673163  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:08.707920  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:08.707948  984872 cri.go:89] found id: ""
	I1210 07:25:08.707957  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:25:08.708022  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:08.712604  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:25:08.712689  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:08.746806  984872 cri.go:89] found id: ""
	I1210 07:25:08.746829  984872 logs.go:282] 0 containers: []
	W1210 07:25:08.746845  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:25:08.746851  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:08.746920  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:08.790332  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:08.790360  984872 cri.go:89] found id: ""
	I1210 07:25:08.790374  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:25:08.790441  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:08.797015  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:08.797099  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:08.835184  984872 cri.go:89] found id: ""
	I1210 07:25:08.835211  984872 logs.go:282] 0 containers: []
	W1210 07:25:08.835220  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:08.835226  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:08.835285  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:08.876660  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:08.876681  984872 cri.go:89] found id: ""
	I1210 07:25:08.876698  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:25:08.876759  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:08.881249  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:08.881379  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:08.919690  984872 cri.go:89] found id: ""
	I1210 07:25:08.919753  984872 logs.go:282] 0 containers: []
	W1210 07:25:08.919785  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:08.919811  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:08.919933  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:08.952206  984872 cri.go:89] found id: ""
	I1210 07:25:08.952286  984872 logs.go:282] 0 containers: []
	W1210 07:25:08.952313  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:08.952347  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:08.952381  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:09.023336  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:09.023447  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:09.042644  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:09.042787  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:09.117102  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:09.117125  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:25:09.117139  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:09.162289  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:25:09.162359  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:25:09.191788  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:25:09.191822  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:09.226552  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:25:09.226585  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:09.263296  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:25:09.263329  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:09.295274  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:25:09.295310  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:11.846686  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:11.857986  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:11.858059  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:11.887322  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:11.887341  984872 cri.go:89] found id: ""
	I1210 07:25:11.887350  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:25:11.887405  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:11.891943  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:25:11.892021  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:11.927221  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:11.927243  984872 cri.go:89] found id: ""
	I1210 07:25:11.927252  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:25:11.927308  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:11.932128  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:25:11.932204  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:11.971483  984872 cri.go:89] found id: ""
	I1210 07:25:11.971518  984872 logs.go:282] 0 containers: []
	W1210 07:25:11.971531  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:25:11.971542  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:11.971638  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:12.011457  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:12.011482  984872 cri.go:89] found id: ""
	I1210 07:25:12.011491  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:25:12.011581  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:12.016340  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:12.016456  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:12.054034  984872 cri.go:89] found id: ""
	I1210 07:25:12.054055  984872 logs.go:282] 0 containers: []
	W1210 07:25:12.054064  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:12.054070  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:12.054177  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:12.090975  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:12.090999  984872 cri.go:89] found id: ""
	I1210 07:25:12.091008  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:25:12.091094  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:12.095249  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:12.095351  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:12.128290  984872 cri.go:89] found id: ""
	I1210 07:25:12.128314  984872 logs.go:282] 0 containers: []
	W1210 07:25:12.128323  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:12.128329  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:12.128433  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:12.165027  984872 cri.go:89] found id: ""
	I1210 07:25:12.165054  984872 logs.go:282] 0 containers: []
	W1210 07:25:12.165063  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:12.165109  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:25:12.165128  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:12.209505  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:25:12.209588  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:12.238493  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:25:12.238519  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:25:12.272473  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:25:12.272551  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:12.341735  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:12.341820  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:12.442170  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:12.442238  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:12.460739  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:12.460882  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:12.549653  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:12.549725  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:25:12.549754  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:12.617392  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:25:12.617469  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:15.165371  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:15.176312  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:15.176387  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:15.203576  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:15.203599  984872 cri.go:89] found id: ""
	I1210 07:25:15.203608  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:25:15.203676  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:15.207900  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:25:15.207982  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:15.243784  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:15.243805  984872 cri.go:89] found id: ""
	I1210 07:25:15.243813  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:25:15.243873  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:15.248315  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:25:15.248394  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:15.310710  984872 cri.go:89] found id: ""
	I1210 07:25:15.310738  984872 logs.go:282] 0 containers: []
	W1210 07:25:15.310748  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:25:15.310754  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:15.310816  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:15.355799  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:15.355818  984872 cri.go:89] found id: ""
	I1210 07:25:15.355825  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:25:15.355878  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:15.359678  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:15.359749  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:15.394667  984872 cri.go:89] found id: ""
	I1210 07:25:15.394693  984872 logs.go:282] 0 containers: []
	W1210 07:25:15.394702  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:15.394709  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:15.394768  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:15.430862  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:15.430885  984872 cri.go:89] found id: ""
	I1210 07:25:15.430893  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:25:15.430949  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:15.435050  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:15.435119  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:15.467488  984872 cri.go:89] found id: ""
	I1210 07:25:15.467516  984872 logs.go:282] 0 containers: []
	W1210 07:25:15.467526  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:15.467536  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:15.467597  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:15.503590  984872 cri.go:89] found id: ""
	I1210 07:25:15.503618  984872 logs.go:282] 0 containers: []
	W1210 07:25:15.503628  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:15.503642  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:15.503653  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:15.568724  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:15.568760  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:15.592235  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:25:15.592265  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:25:15.625997  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:25:15.626033  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:15.670746  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:15.670778  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:15.763221  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:15.763241  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:25:15.763253  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:15.827397  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:25:15.827432  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:15.868485  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:25:15.868525  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:15.914645  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:25:15.914678  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:18.451956  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:18.462199  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:18.462274  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:18.487928  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:18.487953  984872 cri.go:89] found id: ""
	I1210 07:25:18.487969  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:25:18.488032  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:18.491744  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:25:18.491817  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:18.516679  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:18.516699  984872 cri.go:89] found id: ""
	I1210 07:25:18.516707  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:25:18.516762  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:18.520576  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:25:18.520654  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:18.546198  984872 cri.go:89] found id: ""
	I1210 07:25:18.546226  984872 logs.go:282] 0 containers: []
	W1210 07:25:18.546235  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:25:18.546242  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:18.546303  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:18.570909  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:18.570928  984872 cri.go:89] found id: ""
	I1210 07:25:18.570936  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:25:18.570999  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:18.574975  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:18.575084  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:18.604281  984872 cri.go:89] found id: ""
	I1210 07:25:18.604302  984872 logs.go:282] 0 containers: []
	W1210 07:25:18.604312  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:18.604318  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:18.604375  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:18.634343  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:18.634413  984872 cri.go:89] found id: ""
	I1210 07:25:18.634437  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:25:18.634557  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:18.638273  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:18.638344  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:18.663016  984872 cri.go:89] found id: ""
	I1210 07:25:18.663044  984872 logs.go:282] 0 containers: []
	W1210 07:25:18.663053  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:18.663060  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:18.663124  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:18.687681  984872 cri.go:89] found id: ""
	I1210 07:25:18.687703  984872 logs.go:282] 0 containers: []
	W1210 07:25:18.687712  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:18.687728  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:18.687738  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:18.745802  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:25:18.745838  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:18.782956  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:25:18.782989  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:18.819577  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:25:18.819610  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:25:18.849374  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:25:18.849412  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:18.880959  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:18.881044  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:18.900523  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:18.900606  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:19.006563  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:19.006640  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:25:19.006670  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:19.094346  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:25:19.094421  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
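Across every iteration the failure signature is identical: crictl finds the kube-apiserver, etcd, kube-scheduler, and kube-controller-manager containers, coredns, kube-proxy, kindnet, and storage-provisioner are never present, and localhost:8443 refuses connections. That combination suggests an apiserver container that exists in containerd but is not serving its secure port. A hedged sketch of two follow-up checks one could run on the node; these are hypothetical next steps, not commands the minikube loop itself runs:

    # Is anything listening on the apiserver's secure port? (hypothetical check)
    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"

    # Probe the apiserver health endpoint directly (hypothetical check)
    curl -sk https://localhost:8443/healthz \
      || echo "connection refused, matching the describe-nodes failures above"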
	I1210 07:25:21.665753  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:21.676001  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:21.676076  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:21.702360  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:21.702385  984872 cri.go:89] found id: ""
	I1210 07:25:21.702394  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:25:21.702450  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:21.706032  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:25:21.706101  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:21.732977  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:21.733000  984872 cri.go:89] found id: ""
	I1210 07:25:21.733008  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:25:21.733062  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:21.736594  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:25:21.736668  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:21.761527  984872 cri.go:89] found id: ""
	I1210 07:25:21.761554  984872 logs.go:282] 0 containers: []
	W1210 07:25:21.761564  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:25:21.761570  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:21.761628  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:21.786807  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:21.786832  984872 cri.go:89] found id: ""
	I1210 07:25:21.786842  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:25:21.786901  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:21.790579  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:21.790654  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:21.823527  984872 cri.go:89] found id: ""
	I1210 07:25:21.823550  984872 logs.go:282] 0 containers: []
	W1210 07:25:21.823558  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:21.823564  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:21.823623  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:21.848415  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:21.848438  984872 cri.go:89] found id: ""
	I1210 07:25:21.848447  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:25:21.848502  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:21.852242  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:21.852319  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:21.877897  984872 cri.go:89] found id: ""
	I1210 07:25:21.877919  984872 logs.go:282] 0 containers: []
	W1210 07:25:21.877927  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:21.877934  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:21.878003  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:21.903620  984872 cri.go:89] found id: ""
	I1210 07:25:21.903698  984872 logs.go:282] 0 containers: []
	W1210 07:25:21.903728  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:21.903750  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:21.903762  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:21.962129  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:21.962165  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:22.031296  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:22.031358  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:25:22.031388  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:22.081731  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:25:22.081808  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:22.117754  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:25:22.117833  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:22.164548  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:25:22.164584  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:25:22.194428  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:22.194531  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:22.211312  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:25:22.211342  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:22.248245  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:25:22.248276  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:24.781726  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:24.794058  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:24.794132  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:24.819441  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:24.819464  984872 cri.go:89] found id: ""
	I1210 07:25:24.819472  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:25:24.819528  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:24.823268  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:25:24.823347  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:24.849650  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:24.849678  984872 cri.go:89] found id: ""
	I1210 07:25:24.849687  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:25:24.849743  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:24.853610  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:25:24.853688  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:24.878764  984872 cri.go:89] found id: ""
	I1210 07:25:24.878788  984872 logs.go:282] 0 containers: []
	W1210 07:25:24.878797  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:25:24.878803  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:24.878867  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:24.904351  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:24.904372  984872 cri.go:89] found id: ""
	I1210 07:25:24.904381  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:25:24.904438  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:24.908288  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:24.908364  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:24.934382  984872 cri.go:89] found id: ""
	I1210 07:25:24.934404  984872 logs.go:282] 0 containers: []
	W1210 07:25:24.934412  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:24.934419  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:24.934511  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:24.960665  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:24.960699  984872 cri.go:89] found id: ""
	I1210 07:25:24.960707  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:25:24.960766  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:24.964709  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:24.964815  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:24.993771  984872 cri.go:89] found id: ""
	I1210 07:25:24.993799  984872 logs.go:282] 0 containers: []
	W1210 07:25:24.993809  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:24.993815  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:24.993905  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:25.046030  984872 cri.go:89] found id: ""
	I1210 07:25:25.046089  984872 logs.go:282] 0 containers: []
	W1210 07:25:25.046136  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:25.046158  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:25.046177  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:25.072180  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:25.072216  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:25.158500  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:25.158523  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:25:25.158538  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:25.197290  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:25:25.197329  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:25.225038  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:25:25.225125  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:25.263771  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:25:25.263805  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:25:25.293462  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:25:25.293496  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:25.322184  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:25.322214  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:25.380943  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:25:25.380980  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:27.914501  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:27.924621  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:27.924690  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:27.948632  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:27.948652  984872 cri.go:89] found id: ""
	I1210 07:25:27.948661  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:25:27.948719  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:27.952622  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:25:27.952706  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:27.977784  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:27.977804  984872 cri.go:89] found id: ""
	I1210 07:25:27.977812  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:25:27.977866  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:27.981444  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:25:27.981513  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:28.018259  984872 cri.go:89] found id: ""
	I1210 07:25:28.018286  984872 logs.go:282] 0 containers: []
	W1210 07:25:28.018296  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:25:28.018303  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:28.018374  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:28.055701  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:28.055724  984872 cri.go:89] found id: ""
	I1210 07:25:28.055733  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:25:28.055797  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:28.060362  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:28.060439  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:28.089343  984872 cri.go:89] found id: ""
	I1210 07:25:28.089372  984872 logs.go:282] 0 containers: []
	W1210 07:25:28.089382  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:28.089388  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:28.089509  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:28.115999  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:28.116025  984872 cri.go:89] found id: ""
	I1210 07:25:28.116033  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:25:28.116093  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:28.120226  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:28.120325  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:28.151512  984872 cri.go:89] found id: ""
	I1210 07:25:28.151538  984872 logs.go:282] 0 containers: []
	W1210 07:25:28.151547  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:28.151553  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:28.151661  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:28.177136  984872 cri.go:89] found id: ""
	I1210 07:25:28.177163  984872 logs.go:282] 0 containers: []
	W1210 07:25:28.177172  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:28.177188  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:28.177200  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:28.235417  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:28.235453  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:28.253063  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:25:28.253147  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:28.286486  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:25:28.286566  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:28.320624  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:25:28.320668  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:25:28.352030  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:28.352094  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:28.419107  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:28.419129  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:25:28.419167  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:28.467944  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:25:28.467982  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:28.502928  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:25:28.502964  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:31.042587  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:31.055037  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:31.055126  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:31.084387  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:31.084414  984872 cri.go:89] found id: ""
	I1210 07:25:31.084423  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:25:31.084484  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:31.088590  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:25:31.088673  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:31.120767  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:31.120798  984872 cri.go:89] found id: ""
	I1210 07:25:31.120809  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:25:31.120872  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:31.125455  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:25:31.125531  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:31.159418  984872 cri.go:89] found id: ""
	I1210 07:25:31.159444  984872 logs.go:282] 0 containers: []
	W1210 07:25:31.159460  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:25:31.159466  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:31.159528  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:31.185345  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:31.185369  984872 cri.go:89] found id: ""
	I1210 07:25:31.185378  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:25:31.185436  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:31.189028  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:31.189100  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:31.217852  984872 cri.go:89] found id: ""
	I1210 07:25:31.217879  984872 logs.go:282] 0 containers: []
	W1210 07:25:31.217888  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:31.217895  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:31.217958  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:31.242460  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:31.242503  984872 cri.go:89] found id: ""
	I1210 07:25:31.242511  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:25:31.242568  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:31.246058  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:31.246133  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:31.271656  984872 cri.go:89] found id: ""
	I1210 07:25:31.271728  984872 logs.go:282] 0 containers: []
	W1210 07:25:31.271754  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:31.271774  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:31.271850  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:31.296186  984872 cri.go:89] found id: ""
	I1210 07:25:31.296211  984872 logs.go:282] 0 containers: []
	W1210 07:25:31.296220  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:31.296238  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:31.296250  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:31.360070  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:25:31.360108  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:31.393922  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:25:31.393974  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:31.426818  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:25:31.426850  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:25:31.456019  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:25:31.456051  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:31.485529  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:31.485557  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:31.502096  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:31.502131  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:31.569339  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:31.569358  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:25:31.569370  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:31.603742  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:25:31.603775  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:34.139781  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:34.151119  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:34.151194  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:34.181708  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:34.181731  984872 cri.go:89] found id: ""
	I1210 07:25:34.181740  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:25:34.181795  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:34.185526  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:25:34.185599  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:34.213000  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:34.213024  984872 cri.go:89] found id: ""
	I1210 07:25:34.213033  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:25:34.213088  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:34.216751  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:25:34.216825  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:34.241290  984872 cri.go:89] found id: ""
	I1210 07:25:34.241318  984872 logs.go:282] 0 containers: []
	W1210 07:25:34.241326  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:25:34.241333  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:34.241392  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:34.268260  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:34.268282  984872 cri.go:89] found id: ""
	I1210 07:25:34.268290  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:25:34.268347  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:34.271917  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:34.271991  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:34.297258  984872 cri.go:89] found id: ""
	I1210 07:25:34.297281  984872 logs.go:282] 0 containers: []
	W1210 07:25:34.297290  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:34.297296  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:34.297357  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:34.326101  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:34.326125  984872 cri.go:89] found id: ""
	I1210 07:25:34.326133  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:25:34.326227  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:34.330266  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:34.330350  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:34.355839  984872 cri.go:89] found id: ""
	I1210 07:25:34.355862  984872 logs.go:282] 0 containers: []
	W1210 07:25:34.355871  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:34.355878  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:34.355963  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:34.380610  984872 cri.go:89] found id: ""
	I1210 07:25:34.380688  984872 logs.go:282] 0 containers: []
	W1210 07:25:34.380711  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:34.380740  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:34.380760  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:34.445916  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:34.445937  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:25:34.445951  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:34.480235  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:25:34.480269  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:34.517933  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:25:34.517964  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:34.546180  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:25:34.546207  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:34.578436  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:34.578573  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:34.636860  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:34.636899  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:34.653639  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:25:34.653670  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:25:34.683786  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:25:34.683823  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
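	
	Every "describe nodes" step in these cycles fails the same way: kubectl cannot reach the apiserver at localhost:8443, exits with status 1, and minikube records the failure as a warning and retries. A sketch of how that surfaces to a Go caller, assuming you run it inside the minikube node where the versioned kubectl path from the log exists (the error handling shown is illustrative, not minikube's actual implementation):
	
	```go
	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Same shape as the failing command in the log: kubectl pointed at a
		// kubeconfig whose server (localhost:8443) refuses connections.
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
	
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// kubectl exits 1 and prints "The connection to the server ... was
			// refused" on stderr; the caller logs it as a warning and retries.
			fmt.Printf("describe nodes failed (status %d):\n%s", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("could not run kubectl:", err)
			return
		}
		fmt.Printf("%s", out)
	}
	```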
	I1210 07:25:37.225709  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:37.235933  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:37.236007  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:37.260636  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:37.260659  984872 cri.go:89] found id: ""
	I1210 07:25:37.260668  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:25:37.260725  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:37.264537  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:25:37.264615  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:37.291089  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:37.291112  984872 cri.go:89] found id: ""
	I1210 07:25:37.291121  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:25:37.291176  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:37.294923  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:25:37.294995  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:37.319487  984872 cri.go:89] found id: ""
	I1210 07:25:37.319510  984872 logs.go:282] 0 containers: []
	W1210 07:25:37.319518  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:25:37.319525  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:37.319582  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:37.345986  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:37.346006  984872 cri.go:89] found id: ""
	I1210 07:25:37.346013  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:25:37.346066  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:37.349871  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:37.349945  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:37.375675  984872 cri.go:89] found id: ""
	I1210 07:25:37.375702  984872 logs.go:282] 0 containers: []
	W1210 07:25:37.375711  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:37.375717  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:37.375780  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:37.402030  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:37.402054  984872 cri.go:89] found id: ""
	I1210 07:25:37.402063  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:25:37.402119  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:37.405969  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:37.406049  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:37.432351  984872 cri.go:89] found id: ""
	I1210 07:25:37.432377  984872 logs.go:282] 0 containers: []
	W1210 07:25:37.432387  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:37.432394  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:37.432481  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:37.458269  984872 cri.go:89] found id: ""
	I1210 07:25:37.458309  984872 logs.go:282] 0 containers: []
	W1210 07:25:37.458336  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:37.458378  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:37.458396  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:37.517135  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:37.517171  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:37.535597  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:37.535628  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:37.607650  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:37.607673  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:25:37.607686  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:37.640411  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:25:37.640441  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:37.670452  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:25:37.670507  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:37.703335  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:25:37.703365  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:37.736293  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:25:37.736318  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:37.770602  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:25:37.770677  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:25:40.300254  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:40.311204  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:40.311279  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:40.339432  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:40.339452  984872 cri.go:89] found id: ""
	I1210 07:25:40.339460  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:25:40.339517  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:40.343249  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:25:40.343366  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:40.369601  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:40.369626  984872 cri.go:89] found id: ""
	I1210 07:25:40.369635  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:25:40.369693  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:40.373446  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:25:40.373522  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:40.403048  984872 cri.go:89] found id: ""
	I1210 07:25:40.403073  984872 logs.go:282] 0 containers: []
	W1210 07:25:40.403082  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:25:40.403088  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:40.403190  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:40.438249  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:40.438272  984872 cri.go:89] found id: ""
	I1210 07:25:40.438280  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:25:40.438343  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:40.442133  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:40.442217  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:40.467438  984872 cri.go:89] found id: ""
	I1210 07:25:40.467474  984872 logs.go:282] 0 containers: []
	W1210 07:25:40.467483  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:40.467490  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:40.467558  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:40.494546  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:40.494581  984872 cri.go:89] found id: ""
	I1210 07:25:40.494590  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:25:40.494660  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:40.498671  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:40.498745  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:40.523953  984872 cri.go:89] found id: ""
	I1210 07:25:40.524033  984872 logs.go:282] 0 containers: []
	W1210 07:25:40.524048  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:40.524058  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:40.524128  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:40.549053  984872 cri.go:89] found id: ""
	I1210 07:25:40.549088  984872 logs.go:282] 0 containers: []
	W1210 07:25:40.549097  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:40.549111  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:25:40.549122  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:40.578841  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:40.578869  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:40.648747  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:40.648770  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:25:40.648783  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:40.684406  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:25:40.684436  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:25:40.714900  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:40.714937  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:40.775465  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:40.775504  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:40.792904  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:25:40.792932  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:40.853283  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:25:40.853317  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:40.894181  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:25:40.894216  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:43.423767  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:43.434104  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:43.434172  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:43.460793  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:43.460821  984872 cri.go:89] found id: ""
	I1210 07:25:43.460830  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:25:43.460900  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:43.464655  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:25:43.464734  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:43.490688  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:43.490712  984872 cri.go:89] found id: ""
	I1210 07:25:43.490720  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:25:43.490775  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:43.494663  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:25:43.494785  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:43.520908  984872 cri.go:89] found id: ""
	I1210 07:25:43.520991  984872 logs.go:282] 0 containers: []
	W1210 07:25:43.521014  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:25:43.521036  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:43.521136  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:43.549025  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:43.549087  984872 cri.go:89] found id: ""
	I1210 07:25:43.549118  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:25:43.549201  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:43.553046  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:43.553123  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:43.577758  984872 cri.go:89] found id: ""
	I1210 07:25:43.577782  984872 logs.go:282] 0 containers: []
	W1210 07:25:43.577791  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:43.577797  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:43.577917  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:43.606494  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:43.606569  984872 cri.go:89] found id: ""
	I1210 07:25:43.606591  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:25:43.606674  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:43.610326  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:43.610427  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:43.635159  984872 cri.go:89] found id: ""
	I1210 07:25:43.635231  984872 logs.go:282] 0 containers: []
	W1210 07:25:43.635256  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:43.635269  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:43.635345  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:43.660158  984872 cri.go:89] found id: ""
	I1210 07:25:43.660234  984872 logs.go:282] 0 containers: []
	W1210 07:25:43.660257  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:43.660289  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:25:43.660308  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:43.687521  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:43.687550  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:43.747481  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:43.747515  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:43.764204  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:43.764233  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:43.849809  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:43.849841  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:25:43.849854  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:43.879506  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:25:43.879540  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:43.921623  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:25:43.921656  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:25:43.951103  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:25:43.951135  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:43.986843  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:25:43.986877  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
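
The block above is one full iteration of the apiserver health-wait loop that repeats for the remainder of this log: probe for a kube-apiserver process with pgrep, enumerate each control-plane component with crictl, then gather component logs because the probe keeps failing. A minimal Go sketch of the same poll-until-visible pattern follows; the probe command is taken from the log, while the interval, timeout, and function names are illustrative assumptions, not values from minikube's source.

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer repeats the probe seen in the log above until a
    // kube-apiserver process is visible or the context expires.
    func waitForAPIServer(ctx context.Context, interval time.Duration) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            // Same probe the log runs every cycle: pgrep for the apiserver process.
            probe := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
            if err := probe.Run(); err == nil {
                return nil // a kube-apiserver process exists
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("kube-apiserver never became visible: %w", ctx.Err())
            case <-ticker.C:
                // retry on the next tick
            }
        }
    }

    func main() {
        // Timeout and interval are assumptions chosen to mirror the ~3s cadence above.
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()
        if err := waitForAPIServer(ctx, 3*time.Second); err != nil {
            fmt.Println(err)
        }
    }
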
	I1210 07:25:46.523298  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:46.534382  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:46.534449  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:46.562730  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:46.562750  984872 cri.go:89] found id: ""
	I1210 07:25:46.562758  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:25:46.562811  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:46.567386  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:25:46.567461  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:46.604833  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:46.604852  984872 cri.go:89] found id: ""
	I1210 07:25:46.604860  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:25:46.604916  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:46.609411  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:25:46.609482  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:46.646383  984872 cri.go:89] found id: ""
	I1210 07:25:46.646404  984872 logs.go:282] 0 containers: []
	W1210 07:25:46.646413  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:25:46.646419  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:46.646508  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:46.680800  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:46.680825  984872 cri.go:89] found id: ""
	I1210 07:25:46.680834  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:25:46.680889  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:46.685267  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:46.685347  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:46.730880  984872 cri.go:89] found id: ""
	I1210 07:25:46.730901  984872 logs.go:282] 0 containers: []
	W1210 07:25:46.730910  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:46.730916  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:46.731016  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:46.758695  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:46.758715  984872 cri.go:89] found id: ""
	I1210 07:25:46.758724  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:25:46.758789  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:46.762989  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:46.763059  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:46.809849  984872 cri.go:89] found id: ""
	I1210 07:25:46.809929  984872 logs.go:282] 0 containers: []
	W1210 07:25:46.809956  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:46.809994  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:46.810092  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:46.856655  984872 cri.go:89] found id: ""
	I1210 07:25:46.856729  984872 logs.go:282] 0 containers: []
	W1210 07:25:46.856753  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:46.856803  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:25:46.856833  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:46.902975  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:46.903004  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:46.921419  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:46.921445  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:47.003785  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:47.003858  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:25:47.003887  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:47.041811  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:47.041887  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:47.107720  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:25:47.107801  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:47.164190  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:25:47.164268  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:47.207765  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:25:47.207840  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:47.239792  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:25:47.239816  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:25:49.776454  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:49.786451  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:49.786547  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:49.822062  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:49.822088  984872 cri.go:89] found id: ""
	I1210 07:25:49.822096  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:25:49.822153  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:49.826592  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:25:49.826671  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:49.858966  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:49.858991  984872 cri.go:89] found id: ""
	I1210 07:25:49.859002  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:25:49.859058  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:49.862878  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:25:49.863010  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:49.889554  984872 cri.go:89] found id: ""
	I1210 07:25:49.889578  984872 logs.go:282] 0 containers: []
	W1210 07:25:49.889587  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:25:49.889599  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:49.889660  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:49.918061  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:49.918095  984872 cri.go:89] found id: ""
	I1210 07:25:49.918105  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:25:49.918163  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:49.921895  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:49.921972  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:49.948665  984872 cri.go:89] found id: ""
	I1210 07:25:49.948744  984872 logs.go:282] 0 containers: []
	W1210 07:25:49.948762  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:49.948770  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:49.948849  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:49.977164  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:49.977197  984872 cri.go:89] found id: ""
	I1210 07:25:49.977206  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:25:49.977272  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:49.981035  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:49.981112  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:50.011732  984872 cri.go:89] found id: ""
	I1210 07:25:50.011762  984872 logs.go:282] 0 containers: []
	W1210 07:25:50.011771  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:50.011778  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:50.011854  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:50.047140  984872 cri.go:89] found id: ""
	I1210 07:25:50.047166  984872 logs.go:282] 0 containers: []
	W1210 07:25:50.047175  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:50.047189  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:50.047200  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:50.064044  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:25:50.064072  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:50.109464  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:25:50.109501  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:50.152830  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:25:50.152859  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:50.185843  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:50.185914  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:50.247232  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:50.247266  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:50.311422  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:50.311493  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:25:50.311514  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:50.343428  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:25:50.343458  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:25:50.373009  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:25:50.373045  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:52.902313  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:52.912755  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:52.912838  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:52.939331  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:52.939358  984872 cri.go:89] found id: ""
	I1210 07:25:52.939367  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:25:52.939422  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:52.943365  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:25:52.943438  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:52.973737  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:52.973756  984872 cri.go:89] found id: ""
	I1210 07:25:52.973771  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:25:52.973829  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:52.977590  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:25:52.977665  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:53.004367  984872 cri.go:89] found id: ""
	I1210 07:25:53.004399  984872 logs.go:282] 0 containers: []
	W1210 07:25:53.004408  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:25:53.004415  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:53.004487  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:53.030857  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:53.030880  984872 cri.go:89] found id: ""
	I1210 07:25:53.030888  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:25:53.030945  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:53.035050  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:53.035169  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:53.061167  984872 cri.go:89] found id: ""
	I1210 07:25:53.061193  984872 logs.go:282] 0 containers: []
	W1210 07:25:53.061209  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:53.061216  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:53.061287  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:53.088082  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:53.088107  984872 cri.go:89] found id: ""
	I1210 07:25:53.088116  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:25:53.088197  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:53.092195  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:53.092282  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:53.127247  984872 cri.go:89] found id: ""
	I1210 07:25:53.127315  984872 logs.go:282] 0 containers: []
	W1210 07:25:53.127330  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:53.127336  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:53.127396  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:53.151933  984872 cri.go:89] found id: ""
	I1210 07:25:53.151997  984872 logs.go:282] 0 containers: []
	W1210 07:25:53.152013  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:53.152027  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:53.152039  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:53.168556  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:53.168586  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:53.231901  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:53.231971  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:25:53.231991  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:53.268960  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:25:53.268993  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:53.306073  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:25:53.306103  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:53.341671  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:25:53.341712  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:53.378375  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:25:53.378408  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:25:53.408945  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:25:53.408984  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:53.440902  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:53.440933  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:56.012279  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:56.023737  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:56.023811  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:56.050754  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:56.050786  984872 cri.go:89] found id: ""
	I1210 07:25:56.050795  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:25:56.050854  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:56.054778  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:25:56.054863  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:56.080670  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:56.080692  984872 cri.go:89] found id: ""
	I1210 07:25:56.080700  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:25:56.080760  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:56.084683  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:25:56.084762  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:56.111091  984872 cri.go:89] found id: ""
	I1210 07:25:56.111119  984872 logs.go:282] 0 containers: []
	W1210 07:25:56.111130  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:25:56.111137  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:56.111198  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:56.143406  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:56.143431  984872 cri.go:89] found id: ""
	I1210 07:25:56.143440  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:25:56.143495  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:56.147275  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:56.147351  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:56.172448  984872 cri.go:89] found id: ""
	I1210 07:25:56.172472  984872 logs.go:282] 0 containers: []
	W1210 07:25:56.172481  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:56.172487  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:56.172572  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:56.198557  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:56.198636  984872 cri.go:89] found id: ""
	I1210 07:25:56.198652  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:25:56.198709  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:56.202404  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:56.202512  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:56.229957  984872 cri.go:89] found id: ""
	I1210 07:25:56.229981  984872 logs.go:282] 0 containers: []
	W1210 07:25:56.229990  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:56.229996  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:56.230060  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:56.258142  984872 cri.go:89] found id: ""
	I1210 07:25:56.258168  984872 logs.go:282] 0 containers: []
	W1210 07:25:56.258178  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:56.258192  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:25:56.258204  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:56.301659  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:25:56.301692  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:56.336805  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:25:56.336839  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:25:56.366437  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:56.366549  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:56.429189  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:25:56.429224  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:56.473512  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:25:56.473557  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:56.503287  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:25:56.503317  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:56.536975  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:56.537004  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:56.554221  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:56.554251  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:56.634841  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
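
Every "describe nodes" attempt in this log fails the same way: kubectl on the node is refused on localhost:8443, which means nothing is listening on the apiserver port at all, whereas a timeout would instead point at routing or a firewall. A hedged Go sketch of that distinction; the address is the one from the stderr above, everything else is illustrative:

    package main

    import (
        "errors"
        "fmt"
        "net"
        "syscall"
        "time"
    )

    func main() {
        // The address kubectl was refused on in the stderr above.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err == nil {
            conn.Close()
            fmt.Println("apiserver port is accepting connections")
            return
        }
        // ECONNREFUSED: no listener on the port (apiserver process is down),
        // as opposed to a timeout, which would suggest a network-path problem.
        if errors.Is(err, syscall.ECONNREFUSED) {
            fmt.Println("connection refused: nothing listening on 8443")
        } else {
            fmt.Println("dial failed:", err)
        }
    }
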
	I1210 07:25:59.135416  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:59.147406  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:59.147481  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:59.173709  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:59.173731  984872 cri.go:89] found id: ""
	I1210 07:25:59.173739  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:25:59.173796  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:59.177599  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:25:59.177681  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:59.208915  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:59.208938  984872 cri.go:89] found id: ""
	I1210 07:25:59.209257  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:25:59.209337  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:59.214168  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:25:59.214247  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:59.245808  984872 cri.go:89] found id: ""
	I1210 07:25:59.245834  984872 logs.go:282] 0 containers: []
	W1210 07:25:59.245843  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:25:59.245849  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:59.245912  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:59.271427  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:25:59.271450  984872 cri.go:89] found id: ""
	I1210 07:25:59.271461  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:25:59.271517  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:59.275402  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:59.275519  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:59.300739  984872 cri.go:89] found id: ""
	I1210 07:25:59.300815  984872 logs.go:282] 0 containers: []
	W1210 07:25:59.300841  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:59.300860  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:59.300953  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:59.326151  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:59.326226  984872 cri.go:89] found id: ""
	I1210 07:25:59.326249  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:25:59.326340  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:25:59.330188  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:59.330307  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:59.360925  984872 cri.go:89] found id: ""
	I1210 07:25:59.361005  984872 logs.go:282] 0 containers: []
	W1210 07:25:59.361043  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:59.361065  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:59.361139  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:59.388979  984872 cri.go:89] found id: ""
	I1210 07:25:59.389047  984872 logs.go:282] 0 containers: []
	W1210 07:25:59.389071  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:59.389109  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:25:59.389135  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:25:59.418931  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:59.418964  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:59.486633  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:59.486656  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:25:59.486670  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:25:59.519864  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:25:59.519896  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:25:59.564537  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:25:59.564572  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:59.620364  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:59.620393  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:59.685736  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:59.685774  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:59.703592  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:25:59.703622  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:25:59.752372  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:25:59.752408  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:02.279308  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:26:02.291137  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:26:02.291213  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:26:02.318204  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:02.318224  984872 cri.go:89] found id: ""
	I1210 07:26:02.318233  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:26:02.318288  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:02.321989  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:26:02.322067  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:26:02.348045  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:02.348068  984872 cri.go:89] found id: ""
	I1210 07:26:02.348077  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:26:02.348134  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:02.352110  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:26:02.352187  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:26:02.381931  984872 cri.go:89] found id: ""
	I1210 07:26:02.381955  984872 logs.go:282] 0 containers: []
	W1210 07:26:02.381963  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:26:02.381970  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:26:02.382031  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:26:02.409827  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:02.409850  984872 cri.go:89] found id: ""
	I1210 07:26:02.409859  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:26:02.409917  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:02.413659  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:26:02.413766  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:26:02.439578  984872 cri.go:89] found id: ""
	I1210 07:26:02.439660  984872 logs.go:282] 0 containers: []
	W1210 07:26:02.439676  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:26:02.439684  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:26:02.439762  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:26:02.467056  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:02.467080  984872 cri.go:89] found id: ""
	I1210 07:26:02.467088  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:26:02.467146  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:02.470942  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:26:02.471020  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:26:02.497166  984872 cri.go:89] found id: ""
	I1210 07:26:02.497190  984872 logs.go:282] 0 containers: []
	W1210 07:26:02.497198  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:26:02.497208  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:26:02.497274  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:26:02.526762  984872 cri.go:89] found id: ""
	I1210 07:26:02.526792  984872 logs.go:282] 0 containers: []
	W1210 07:26:02.526801  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:26:02.526818  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:26:02.526852  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:02.568891  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:26:02.568931  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:02.617350  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:26:02.617381  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:26:02.676232  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:26:02.676269  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:26:02.693427  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:26:02.693461  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:02.726847  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:26:02.726881  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:02.757829  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:26:02.757861  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:26:02.788218  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:26:02.788250  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:26:02.817245  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:26:02.817277  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:26:02.878574  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:26:05.378873  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:26:05.389420  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:26:05.389500  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:26:05.415019  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:05.415042  984872 cri.go:89] found id: ""
	I1210 07:26:05.415050  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:26:05.415112  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:05.419066  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:26:05.419150  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:26:05.444587  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:05.444613  984872 cri.go:89] found id: ""
	I1210 07:26:05.444623  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:26:05.444685  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:05.448708  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:26:05.448784  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:26:05.475451  984872 cri.go:89] found id: ""
	I1210 07:26:05.475478  984872 logs.go:282] 0 containers: []
	W1210 07:26:05.475487  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:26:05.475494  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:26:05.475555  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:26:05.502722  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:05.502746  984872 cri.go:89] found id: ""
	I1210 07:26:05.502755  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:26:05.502816  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:05.506941  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:26:05.507016  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:26:05.533622  984872 cri.go:89] found id: ""
	I1210 07:26:05.533647  984872 logs.go:282] 0 containers: []
	W1210 07:26:05.533656  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:26:05.533662  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:26:05.533723  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:26:05.576710  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:05.576733  984872 cri.go:89] found id: ""
	I1210 07:26:05.576742  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:26:05.576801  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:05.581106  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:26:05.581187  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:26:05.613077  984872 cri.go:89] found id: ""
	I1210 07:26:05.613103  984872 logs.go:282] 0 containers: []
	W1210 07:26:05.613112  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:26:05.613118  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:26:05.613184  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:26:05.649991  984872 cri.go:89] found id: ""
	I1210 07:26:05.650017  984872 logs.go:282] 0 containers: []
	W1210 07:26:05.650026  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:26:05.650041  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:26:05.650054  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:05.686235  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:26:05.686273  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:05.718318  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:26:05.718350  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:26:05.749356  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:26:05.749390  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:26:05.778586  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:26:05.778614  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:26:05.836312  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:26:05.836348  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:26:05.853339  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:26:05.853371  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:26:05.919695  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:26:05.919717  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:26:05.919731  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:05.955418  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:26:05.955449  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:08.487726  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:26:08.498448  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:26:08.498555  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:26:08.530337  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:08.530361  984872 cri.go:89] found id: ""
	I1210 07:26:08.530369  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:26:08.530425  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:08.535238  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:26:08.535317  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:26:08.572379  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:08.572402  984872 cri.go:89] found id: ""
	I1210 07:26:08.572410  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:26:08.572467  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:08.576990  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:26:08.577069  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:26:08.607687  984872 cri.go:89] found id: ""
	I1210 07:26:08.607712  984872 logs.go:282] 0 containers: []
	W1210 07:26:08.607721  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:26:08.607727  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:26:08.607788  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:26:08.638239  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:08.638261  984872 cri.go:89] found id: ""
	I1210 07:26:08.638271  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:26:08.638328  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:08.642107  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:26:08.642187  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:26:08.667448  984872 cri.go:89] found id: ""
	I1210 07:26:08.667471  984872 logs.go:282] 0 containers: []
	W1210 07:26:08.667480  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:26:08.667492  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:26:08.667559  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:26:08.696061  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:08.696144  984872 cri.go:89] found id: ""
	I1210 07:26:08.696160  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:26:08.696219  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:08.700513  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:26:08.700618  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:26:08.725453  984872 cri.go:89] found id: ""
	I1210 07:26:08.725478  984872 logs.go:282] 0 containers: []
	W1210 07:26:08.725487  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:26:08.725494  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:26:08.725558  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:26:08.750913  984872 cri.go:89] found id: ""
	I1210 07:26:08.750937  984872 logs.go:282] 0 containers: []
	W1210 07:26:08.750947  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:26:08.750960  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:26:08.750973  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:26:08.767957  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:26:08.767991  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:08.803426  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:26:08.803455  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:08.845786  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:26:08.845820  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:08.881793  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:26:08.881828  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:26:08.912501  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:26:08.912536  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:26:08.949038  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:26:08.949129  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:26:09.009701  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:26:09.009740  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:26:09.077913  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:26:09.077934  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:26:09.077947  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
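The block above is one complete diagnostic pass: minikube first probes for a running kube-apiserver process with pgrep, then asks the CRI runtime which control-plane containers exist, one component at a time. A minimal shell sketch of that discovery step, assuming crictl is on the node's PATH as in the log (minikube actually drives each command through ssh_runner over SSH):

    # Enumerate control-plane containers the way the log does; an empty
    # result is what gets reported as: No container was found matching "<name>"
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -n "$ids" ]; then
        echo "$name: $ids"
      else
        echo "No container was found matching \"$name\""
      fi
    done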
	I1210 07:26:11.610603  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:26:11.621298  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:26:11.621370  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:26:11.646956  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:11.646979  984872 cri.go:89] found id: ""
	I1210 07:26:11.646988  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:26:11.647044  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:11.650870  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:26:11.650948  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:26:11.676480  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:11.676500  984872 cri.go:89] found id: ""
	I1210 07:26:11.676509  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:26:11.676568  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:11.680384  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:26:11.680457  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:26:11.714722  984872 cri.go:89] found id: ""
	I1210 07:26:11.714750  984872 logs.go:282] 0 containers: []
	W1210 07:26:11.714759  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:26:11.714765  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:26:11.714833  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:26:11.741682  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:11.741707  984872 cri.go:89] found id: ""
	I1210 07:26:11.741716  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:26:11.741776  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:11.745494  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:26:11.745573  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:26:11.772485  984872 cri.go:89] found id: ""
	I1210 07:26:11.772512  984872 logs.go:282] 0 containers: []
	W1210 07:26:11.772520  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:26:11.772527  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:26:11.772588  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:26:11.797887  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:11.797910  984872 cri.go:89] found id: ""
	I1210 07:26:11.797918  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:26:11.797974  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:11.801873  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:26:11.801956  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:26:11.827386  984872 cri.go:89] found id: ""
	I1210 07:26:11.827466  984872 logs.go:282] 0 containers: []
	W1210 07:26:11.827489  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:26:11.827510  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:26:11.827608  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:26:11.856911  984872 cri.go:89] found id: ""
	I1210 07:26:11.856952  984872 logs.go:282] 0 containers: []
	W1210 07:26:11.856961  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:26:11.856975  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:26:11.856986  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:26:11.873646  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:26:11.873678  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:11.913222  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:26:11.913253  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:11.940940  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:26:11.940968  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:26:11.970653  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:26:11.970691  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:26:12.028465  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:26:12.028504  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:26:12.099339  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:26:12.099363  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:26:12.099377  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:12.139393  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:26:12.139426  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:12.182262  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:26:12.182293  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
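After the container inventory, each pass gathers logs from the same fixed sources: the kubelet and containerd systemd units, the kernel ring buffer, the per-container CRI logs, and a kubectl describe nodes attempt. A hedged sketch of that gather step, using the exact commands from the log; <container-id> stands in for the IDs crictl reported above:

    # System-level sources
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

    # Per-component CRI logs, one call per container ID found earlier
    sudo /usr/local/bin/crictl logs --tail 400 <container-id>

    # Cluster view; this is the step that keeps failing while the apiserver is down
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig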
	I1210 07:26:14.714605  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:26:14.729476  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:26:14.729565  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:26:14.767295  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:14.767314  984872 cri.go:89] found id: ""
	I1210 07:26:14.767322  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:26:14.767389  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:14.771879  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:26:14.771961  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:26:14.812279  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:14.812305  984872 cri.go:89] found id: ""
	I1210 07:26:14.812315  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:26:14.812394  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:14.817689  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:26:14.817775  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:26:14.858707  984872 cri.go:89] found id: ""
	I1210 07:26:14.858746  984872 logs.go:282] 0 containers: []
	W1210 07:26:14.858754  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:26:14.858766  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:26:14.858827  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:26:14.900931  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:14.900971  984872 cri.go:89] found id: ""
	I1210 07:26:14.900980  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:26:14.901049  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:14.905928  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:26:14.906036  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:26:14.948790  984872 cri.go:89] found id: ""
	I1210 07:26:14.948866  984872 logs.go:282] 0 containers: []
	W1210 07:26:14.948890  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:26:14.948910  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:26:14.948999  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:26:14.984350  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:14.984374  984872 cri.go:89] found id: ""
	I1210 07:26:14.984383  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:26:14.984447  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:14.988442  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:26:14.988513  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:26:15.029020  984872 cri.go:89] found id: ""
	I1210 07:26:15.029045  984872 logs.go:282] 0 containers: []
	W1210 07:26:15.029054  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:26:15.029060  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:26:15.029131  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:26:15.065928  984872 cri.go:89] found id: ""
	I1210 07:26:15.065957  984872 logs.go:282] 0 containers: []
	W1210 07:26:15.065966  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:26:15.065986  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:26:15.065999  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:15.104408  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:26:15.104443  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:15.140179  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:26:15.140216  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:15.176841  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:26:15.176872  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:15.222883  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:26:15.222913  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:26:15.252191  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:26:15.252220  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:26:15.314694  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:26:15.314783  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:26:15.332708  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:26:15.332796  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:26:15.400857  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:26:15.400923  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:26:15.400951  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
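The "container status" step is the one command in the pass with a built-in fallback chain rather than a fixed binary. Pulled out of the one-liner in the log:

    # `which crictl || echo crictl` keeps the command syntactically valid even
    # when crictl is not on PATH; the trailing || covers hosts where only the
    # Docker CLI is available.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a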
	I1210 07:26:17.930212  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:26:17.941441  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:26:17.941520  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:26:17.972412  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:17.972438  984872 cri.go:89] found id: ""
	I1210 07:26:17.972449  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:26:17.972504  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:17.976861  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:26:17.976940  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:26:18.017744  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:18.017770  984872 cri.go:89] found id: ""
	I1210 07:26:18.017779  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:26:18.017843  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:18.022899  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:26:18.022980  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:26:18.052967  984872 cri.go:89] found id: ""
	I1210 07:26:18.052998  984872 logs.go:282] 0 containers: []
	W1210 07:26:18.053008  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:26:18.053015  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:26:18.053081  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:26:18.088119  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:18.088139  984872 cri.go:89] found id: ""
	I1210 07:26:18.088151  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:26:18.088207  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:18.094752  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:26:18.094839  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:26:18.126505  984872 cri.go:89] found id: ""
	I1210 07:26:18.126532  984872 logs.go:282] 0 containers: []
	W1210 07:26:18.126541  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:26:18.126547  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:26:18.126609  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:26:18.183338  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:18.183363  984872 cri.go:89] found id: ""
	I1210 07:26:18.183373  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:26:18.183447  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:18.187555  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:26:18.187633  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:26:18.218986  984872 cri.go:89] found id: ""
	I1210 07:26:18.219015  984872 logs.go:282] 0 containers: []
	W1210 07:26:18.219024  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:26:18.219030  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:26:18.219097  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:26:18.252071  984872 cri.go:89] found id: ""
	I1210 07:26:18.252094  984872 logs.go:282] 0 containers: []
	W1210 07:26:18.252115  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:26:18.252131  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:26:18.252143  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:26:18.401907  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:26:18.401925  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:26:18.401939  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:18.464619  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:26:18.464656  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:18.499709  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:26:18.499740  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:18.544154  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:26:18.544233  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:26:18.576547  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:26:18.576580  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:26:18.613495  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:26:18.613520  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:18.646342  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:26:18.646375  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:26:18.707335  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:26:18.707372  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
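Every describe nodes attempt in this section fails identically: kubectl reads the server address from /var/lib/minikube/kubeconfig, tries localhost:8443, and the TCP connection is refused, meaning nothing is listening there, even though crictl still reports a kube-apiserver container ID. A hedged way to confirm the symptom by hand (/healthz is the standard apiserver health endpoint; 8443 is the port taken from the error text above):

    # Connection refused here means no listener on 8443, matching the kubectl error
    curl -k https://localhost:8443/healthz || echo "apiserver not serving"

    # The same process probe minikube runs at the top of every pass
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"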
	I1210 07:26:21.224222  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:26:21.238250  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:26:21.238309  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:26:21.264982  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:21.265002  984872 cri.go:89] found id: ""
	I1210 07:26:21.265010  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:26:21.265081  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:21.269238  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:26:21.269310  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:26:21.303831  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:21.303851  984872 cri.go:89] found id: ""
	I1210 07:26:21.303859  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:26:21.303926  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:21.307826  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:26:21.307981  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:26:21.344561  984872 cri.go:89] found id: ""
	I1210 07:26:21.344638  984872 logs.go:282] 0 containers: []
	W1210 07:26:21.344661  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:26:21.344685  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:26:21.344802  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:26:21.376605  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:21.376677  984872 cri.go:89] found id: ""
	I1210 07:26:21.376699  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:26:21.376790  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:21.380819  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:26:21.380941  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:26:21.408824  984872 cri.go:89] found id: ""
	I1210 07:26:21.408852  984872 logs.go:282] 0 containers: []
	W1210 07:26:21.408861  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:26:21.408867  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:26:21.408931  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:26:21.443741  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:21.443765  984872 cri.go:89] found id: ""
	I1210 07:26:21.443773  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:26:21.443829  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:21.448019  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:26:21.448096  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:26:21.481537  984872 cri.go:89] found id: ""
	I1210 07:26:21.481564  984872 logs.go:282] 0 containers: []
	W1210 07:26:21.481572  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:26:21.481578  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:26:21.481636  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:26:21.515544  984872 cri.go:89] found id: ""
	I1210 07:26:21.515571  984872 logs.go:282] 0 containers: []
	W1210 07:26:21.515581  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:26:21.515595  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:26:21.515607  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:26:21.595804  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:26:21.595846  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:21.625393  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:26:21.625426  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:21.693911  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:26:21.693949  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:26:21.730607  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:26:21.730651  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:26:21.755303  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:26:21.755334  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:26:21.840786  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:26:21.840807  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:26:21.840823  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:21.880978  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:26:21.881013  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:21.936709  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:26:21.936747  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:26:24.499108  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:26:24.509380  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:26:24.509453  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:26:24.534161  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:24.534183  984872 cri.go:89] found id: ""
	I1210 07:26:24.534191  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:26:24.534245  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:24.537964  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:26:24.538037  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:26:24.562459  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:24.562504  984872 cri.go:89] found id: ""
	I1210 07:26:24.562513  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:26:24.562569  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:24.566333  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:26:24.566409  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:26:24.591315  984872 cri.go:89] found id: ""
	I1210 07:26:24.591340  984872 logs.go:282] 0 containers: []
	W1210 07:26:24.591349  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:26:24.591356  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:26:24.591417  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:26:24.621145  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:24.621170  984872 cri.go:89] found id: ""
	I1210 07:26:24.621179  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:26:24.621242  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:24.625093  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:26:24.625175  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:26:24.652646  984872 cri.go:89] found id: ""
	I1210 07:26:24.652670  984872 logs.go:282] 0 containers: []
	W1210 07:26:24.652685  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:26:24.652692  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:26:24.652754  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:26:24.682939  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:24.682963  984872 cri.go:89] found id: ""
	I1210 07:26:24.682971  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:26:24.683030  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:24.686810  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:26:24.686885  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:26:24.711225  984872 cri.go:89] found id: ""
	I1210 07:26:24.711250  984872 logs.go:282] 0 containers: []
	W1210 07:26:24.711260  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:26:24.711267  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:26:24.711327  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:26:24.738340  984872 cri.go:89] found id: ""
	I1210 07:26:24.738366  984872 logs.go:282] 0 containers: []
	W1210 07:26:24.738375  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:26:24.738388  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:26:24.738399  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:26:24.812918  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:26:24.812968  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:24.873882  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:26:24.873950  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:26:24.914021  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:26:24.914055  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:26:24.967114  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:26:24.967194  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:26:24.988781  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:26:24.988860  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:26:25.087875  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:26:25.087957  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:26:25.087987  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:25.152095  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:26:25.152186  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:25.197485  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:26:25.197561  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:27.727783  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:26:27.738364  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:26:27.738440  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:26:27.767731  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:27.767752  984872 cri.go:89] found id: ""
	I1210 07:26:27.767760  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:26:27.767823  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:27.771663  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:26:27.771738  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:26:27.806773  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:27.806796  984872 cri.go:89] found id: ""
	I1210 07:26:27.806804  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:26:27.806861  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:27.810829  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:26:27.810904  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:26:27.839423  984872 cri.go:89] found id: ""
	I1210 07:26:27.839449  984872 logs.go:282] 0 containers: []
	W1210 07:26:27.839458  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:26:27.839464  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:26:27.839525  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:26:27.873541  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:27.873563  984872 cri.go:89] found id: ""
	I1210 07:26:27.873572  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:26:27.873629  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:27.877460  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:26:27.877537  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:26:27.902850  984872 cri.go:89] found id: ""
	I1210 07:26:27.902874  984872 logs.go:282] 0 containers: []
	W1210 07:26:27.902882  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:26:27.902889  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:26:27.902950  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:26:27.928985  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:27.929007  984872 cri.go:89] found id: ""
	I1210 07:26:27.929016  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:26:27.929079  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:27.933119  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:26:27.933201  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:26:27.958611  984872 cri.go:89] found id: ""
	I1210 07:26:27.958638  984872 logs.go:282] 0 containers: []
	W1210 07:26:27.958647  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:26:27.958653  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:26:27.958714  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:26:27.985189  984872 cri.go:89] found id: ""
	I1210 07:26:27.985216  984872 logs.go:282] 0 containers: []
	W1210 07:26:27.985225  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:26:27.985239  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:26:27.985251  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:26:28.044606  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:26:28.044643  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:28.084799  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:26:28.084832  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:28.114709  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:26:28.114738  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:28.154114  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:26:28.154146  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:26:28.189183  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:26:28.189232  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:26:28.218211  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:26:28.218245  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:26:28.235039  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:26:28.235070  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:26:28.302789  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:26:28.302812  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:26:28.302827  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:30.854043  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:26:30.864252  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:26:30.864324  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:26:30.890180  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:30.890209  984872 cri.go:89] found id: ""
	I1210 07:26:30.890220  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:26:30.890287  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:30.893973  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:26:30.894050  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:26:30.921165  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:30.921191  984872 cri.go:89] found id: ""
	I1210 07:26:30.921199  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:26:30.921296  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:30.925206  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:26:30.925280  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:26:30.951267  984872 cri.go:89] found id: ""
	I1210 07:26:30.951294  984872 logs.go:282] 0 containers: []
	W1210 07:26:30.951303  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:26:30.951309  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:26:30.951376  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:26:30.977018  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:30.977040  984872 cri.go:89] found id: ""
	I1210 07:26:30.977049  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:26:30.977103  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:30.980719  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:26:30.980792  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:26:31.008061  984872 cri.go:89] found id: ""
	I1210 07:26:31.008088  984872 logs.go:282] 0 containers: []
	W1210 07:26:31.008097  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:26:31.008103  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:26:31.008170  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:26:31.032824  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:31.032843  984872 cri.go:89] found id: ""
	I1210 07:26:31.032851  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:26:31.032909  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:31.036640  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:26:31.036713  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:26:31.062696  984872 cri.go:89] found id: ""
	I1210 07:26:31.062727  984872 logs.go:282] 0 containers: []
	W1210 07:26:31.062736  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:26:31.062743  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:26:31.062804  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:26:31.088825  984872 cri.go:89] found id: ""
	I1210 07:26:31.088855  984872 logs.go:282] 0 containers: []
	W1210 07:26:31.088864  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:26:31.088881  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:26:31.088894  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:26:31.147278  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:26:31.147315  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:26:31.216619  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:26:31.216644  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:26:31.216657  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:31.249497  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:26:31.249531  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:31.282293  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:26:31.282323  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:31.308599  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:26:31.308626  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:31.345419  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:26:31.345452  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:26:31.361834  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:26:31.361862  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:26:31.391682  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:26:31.391715  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
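Taken together, the pass timestamps (07:26:08, :11, :14, :17, :21, :24, :27, :30, :33) show the same probe-and-gather cycle repeating roughly every three seconds: minikube is polling until the apiserver answers and dumping the full diagnostic bundle on each miss. A hedged sketch of the equivalent wait loop:

    # Poll for a healthy apiserver, re-gathering diagnostics on each miss;
    # the ~3 s sleep matches the gap between probe timestamps in this log.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
        # ... re-run the container discovery and log gathering sketched above ...
        sleep 3
    done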
	I1210 07:26:33.936389  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:26:33.946788  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:26:33.946858  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:26:33.978037  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:33.978067  984872 cri.go:89] found id: ""
	I1210 07:26:33.978076  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:26:33.978137  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:33.981789  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:26:33.981862  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:26:34.008287  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:34.008308  984872 cri.go:89] found id: ""
	I1210 07:26:34.008316  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:26:34.008376  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:34.012425  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:26:34.012499  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:26:34.040785  984872 cri.go:89] found id: ""
	I1210 07:26:34.040807  984872 logs.go:282] 0 containers: []
	W1210 07:26:34.040815  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:26:34.040822  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:26:34.040885  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:26:34.066767  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:34.066787  984872 cri.go:89] found id: ""
	I1210 07:26:34.066795  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:26:34.066852  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:34.070613  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:26:34.070690  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:26:34.095549  984872 cri.go:89] found id: ""
	I1210 07:26:34.095574  984872 logs.go:282] 0 containers: []
	W1210 07:26:34.095582  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:26:34.095594  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:26:34.095653  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:26:34.121268  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:34.121293  984872 cri.go:89] found id: ""
	I1210 07:26:34.121302  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:26:34.121357  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:34.125035  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:26:34.125119  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:26:34.155757  984872 cri.go:89] found id: ""
	I1210 07:26:34.155780  984872 logs.go:282] 0 containers: []
	W1210 07:26:34.155789  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:26:34.155795  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:26:34.155854  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:26:34.179507  984872 cri.go:89] found id: ""
	I1210 07:26:34.179532  984872 logs.go:282] 0 containers: []
	W1210 07:26:34.179541  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:26:34.179557  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:26:34.179573  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:26:34.195884  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:26:34.195974  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:34.222660  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:26:34.222692  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:26:34.254368  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:26:34.254437  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:26:34.317837  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
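Each "describe nodes" gather fails the same way: connection refused on localhost:8443 means nothing is listening on the apiserver port inside the node. The two commands the log itself runs reproduce the check by hand:

    # is any apiserver container present at all?
    sudo crictl ps -a --quiet --name=kube-apiserver
    # retry the exact describe call minikube attempted
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig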
	I1210 07:26:34.317860  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:26:34.317873  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:34.351603  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:26:34.351636  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:34.388881  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:26:34.388914  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:34.423905  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:26:34.423947  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:26:34.453949  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:26:34.453988  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:26:37.016208  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:26:37.029721  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:26:37.029810  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:26:37.058057  984872 cri.go:89] found id: "a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:37.058080  984872 cri.go:89] found id: ""
	I1210 07:26:37.058089  984872 logs.go:282] 1 containers: [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4]
	I1210 07:26:37.058147  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:37.062509  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:26:37.062582  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:26:37.088650  984872 cri.go:89] found id: "ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:37.088676  984872 cri.go:89] found id: ""
	I1210 07:26:37.088685  984872 logs.go:282] 1 containers: [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1]
	I1210 07:26:37.088748  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:37.092585  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:26:37.092660  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:26:37.122327  984872 cri.go:89] found id: ""
	I1210 07:26:37.122353  984872 logs.go:282] 0 containers: []
	W1210 07:26:37.122362  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:26:37.122369  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:26:37.122433  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:26:37.154345  984872 cri.go:89] found id: "4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:37.154369  984872 cri.go:89] found id: ""
	I1210 07:26:37.154384  984872 logs.go:282] 1 containers: [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732]
	I1210 07:26:37.154444  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:37.158534  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:26:37.158642  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:26:37.185680  984872 cri.go:89] found id: ""
	I1210 07:26:37.185706  984872 logs.go:282] 0 containers: []
	W1210 07:26:37.185715  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:26:37.185722  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:26:37.185838  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:26:37.211842  984872 cri.go:89] found id: "a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:37.211875  984872 cri.go:89] found id: ""
	I1210 07:26:37.211884  984872 logs.go:282] 1 containers: [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c]
	I1210 07:26:37.211963  984872 ssh_runner.go:195] Run: which crictl
	I1210 07:26:37.215856  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:26:37.215961  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:26:37.240831  984872 cri.go:89] found id: ""
	I1210 07:26:37.240856  984872 logs.go:282] 0 containers: []
	W1210 07:26:37.240865  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:26:37.240872  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:26:37.240933  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:26:37.268416  984872 cri.go:89] found id: ""
	I1210 07:26:37.268494  984872 logs.go:282] 0 containers: []
	W1210 07:26:37.268511  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:26:37.268527  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:26:37.268539  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:26:37.285387  984872 logs.go:123] Gathering logs for kube-apiserver [a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4] ...
	I1210 07:26:37.285425  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4"
	I1210 07:26:37.331815  984872 logs.go:123] Gathering logs for etcd [ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1] ...
	I1210 07:26:37.331850  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1"
	I1210 07:26:37.367270  984872 logs.go:123] Gathering logs for kube-scheduler [4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732] ...
	I1210 07:26:37.367300  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732"
	I1210 07:26:37.396077  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:26:37.396103  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:26:37.425064  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:26:37.425101  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:26:37.454611  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:26:37.454642  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:26:37.513330  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:26:37.513367  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:26:37.590276  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:26:37.590340  984872 logs.go:123] Gathering logs for kube-controller-manager [a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c] ...
	I1210 07:26:37.590369  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c"
	I1210 07:26:40.126969  984872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:26:40.143157  984872 kubeadm.go:602] duration metric: took 4m5.331351583s to restartPrimaryControlPlane
	W1210 07:26:40.143228  984872 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 07:26:40.143314  984872 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 07:26:40.629144  984872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:26:40.645947  984872 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:26:40.655463  984872 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:26:40.655530  984872 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:26:40.663468  984872 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:26:40.663489  984872 kubeadm.go:158] found existing configuration files:
	
	I1210 07:26:40.663539  984872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:26:40.671251  984872 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:26:40.671318  984872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:26:40.679022  984872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:26:40.686754  984872 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:26:40.686861  984872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:26:40.694977  984872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:26:40.702797  984872 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:26:40.702914  984872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:26:40.710112  984872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:26:40.718011  984872 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:26:40.718104  984872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
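The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is otherwise removed so kubeadm can regenerate it (status 2 here simply means the file does not exist). The same loop, condensed (a sketch using only the paths and endpoint shown above):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:8443 \
        "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done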
	I1210 07:26:40.725489  984872 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:26:40.766155  984872 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:26:40.766272  984872 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:26:40.833824  984872 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:26:40.833927  984872 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:26:40.833985  984872 kubeadm.go:319] OS: Linux
	I1210 07:26:40.834061  984872 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:26:40.834141  984872 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:26:40.834221  984872 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:26:40.834316  984872 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:26:40.834422  984872 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:26:40.834507  984872 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:26:40.834553  984872 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:26:40.834607  984872 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:26:40.834684  984872 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:26:40.898601  984872 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:26:40.898793  984872 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:26:40.898960  984872 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:26:49.061628  984872 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:26:49.064539  984872 out.go:252]   - Generating certificates and keys ...
	I1210 07:26:49.064633  984872 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:26:49.064700  984872 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:26:49.064773  984872 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:26:49.064843  984872 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:26:49.064914  984872 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:26:49.064967  984872 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:26:49.065027  984872 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:26:49.065087  984872 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:26:49.065156  984872 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:26:49.065224  984872 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:26:49.065260  984872 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:26:49.065312  984872 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:26:49.124261  984872 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:26:49.306690  984872 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:26:49.610826  984872 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:26:49.810350  984872 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:26:49.935545  984872 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:26:49.938849  984872 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:26:49.939409  984872 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:26:49.942760  984872 out.go:252]   - Booting up control plane ...
	I1210 07:26:49.942857  984872 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:26:49.942930  984872 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:26:49.942993  984872 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:26:49.973582  984872 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:26:49.974279  984872 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:26:49.985623  984872 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:26:49.986859  984872 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:26:49.995009  984872 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:26:50.192873  984872 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:26:50.192989  984872 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:30:50.193034  984872 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000482007s
	I1210 07:30:50.193066  984872 kubeadm.go:319] 
	I1210 07:30:50.193136  984872 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:30:50.193170  984872 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:30:50.193278  984872 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:30:50.193293  984872 kubeadm.go:319] 
	I1210 07:30:50.193404  984872 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:30:50.193441  984872 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:30:50.193478  984872 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:30:50.193486  984872 kubeadm.go:319] 
	I1210 07:30:50.197846  984872 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:30:50.198328  984872 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:30:50.198441  984872 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:30:50.198766  984872 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 07:30:50.198789  984872 kubeadm.go:319] 
	I1210 07:30:50.198891  984872 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
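The wait-control-plane phase polls the kubelet's local healthz endpoint for up to 4m0s; "connection refused" on 127.0.0.1:10248 means the kubelet process never started serving, rather than serving and reporting unhealthy. To triage on the node, run the probe and the two commands kubeadm itself suggests:

    curl -sSL http://127.0.0.1:10248/healthz   # the exact probe kubeadm uses
    systemctl status kubelet
    journalctl -xeu kubelet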
	W1210 07:30:50.198975  984872 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000482007s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
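Given the [WARNING SystemVerification] about cgroups v1 above, the most likely culprit on this cgroup-v1 host (kernel 5.15.0-1084-aws) is kubelet v1.35's cgroup-v1 gate: per the warning, FailCgroupV1 must be set to false to keep running on cgroups v1. A sketch of the opt-out, assuming the config path kubeadm writes above and that the YAML field is spelled failCgroupV1 (casing inferred from the warning text, not confirmed here):

    # append the opt-out if absent, then restart the kubelet
    sudo grep -q '^failCgroupV1:' /var/lib/kubelet/config.yaml \
      || echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet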
	
	I1210 07:30:50.199057  984872 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 07:30:50.608934  984872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:30:50.622244  984872 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:30:50.622312  984872 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:30:50.630412  984872 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:30:50.630435  984872 kubeadm.go:158] found existing configuration files:
	
	I1210 07:30:50.630573  984872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:30:50.638307  984872 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:30:50.638378  984872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:30:50.645798  984872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:30:50.653441  984872 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:30:50.653510  984872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:30:50.660964  984872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:30:50.668663  984872 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:30:50.668727  984872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:30:50.676541  984872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:30:50.684295  984872 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:30:50.684382  984872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:30:50.691894  984872 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:30:50.732602  984872 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:30:50.732823  984872 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:30:50.802633  984872 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:30:50.802709  984872 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:30:50.802753  984872 kubeadm.go:319] OS: Linux
	I1210 07:30:50.802809  984872 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:30:50.802865  984872 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:30:50.802916  984872 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:30:50.802987  984872 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:30:50.803048  984872 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:30:50.803138  984872 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:30:50.803223  984872 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:30:50.803287  984872 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:30:50.803359  984872 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:30:50.878837  984872 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:30:50.878947  984872 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:30:50.879039  984872 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:30:50.884956  984872 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:30:50.890395  984872 out.go:252]   - Generating certificates and keys ...
	I1210 07:30:50.890512  984872 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:30:50.890582  984872 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:30:50.890671  984872 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:30:50.890771  984872 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:30:50.890856  984872 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:30:50.890915  984872 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:30:50.890992  984872 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:30:50.891060  984872 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:30:50.891152  984872 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:30:50.891248  984872 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:30:50.891286  984872 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:30:50.891365  984872 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:30:51.046959  984872 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:30:51.146318  984872 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:30:51.249887  984872 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:30:51.429892  984872 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:30:51.544722  984872 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:30:51.545556  984872 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:30:51.550084  984872 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:30:51.553397  984872 out.go:252]   - Booting up control plane ...
	I1210 07:30:51.553503  984872 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:30:51.553580  984872 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:30:51.554424  984872 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:30:51.577315  984872 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:30:51.577448  984872 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:30:51.585252  984872 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:30:51.585834  984872 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:30:51.586070  984872 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:30:51.724688  984872 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:30:51.724852  984872 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:34:51.724760  984872 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000520681s
	I1210 07:34:51.724787  984872 kubeadm.go:319] 
	I1210 07:34:51.724842  984872 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:34:51.724878  984872 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:34:51.724977  984872 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:34:51.724983  984872 kubeadm.go:319] 
	I1210 07:34:51.725081  984872 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:34:51.725112  984872 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:34:51.725140  984872 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:34:51.725144  984872 kubeadm.go:319] 
	I1210 07:34:51.729604  984872 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:34:51.730002  984872 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:34:51.730103  984872 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:34:51.730350  984872 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 07:34:51.730356  984872 kubeadm.go:319] 
	I1210 07:34:51.730420  984872 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 07:34:51.730506  984872 kubeadm.go:403] duration metric: took 12m16.99305872s to StartCluster
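The 12m16.99s StartCluster total is dominated by three timeouts visible above: the 4m5.33s restartPrimaryControlPlane attempt, then two full kubeadm init runs that each spent 4m0s waiting on the kubelet health check before giving up.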
	I1210 07:34:51.730542  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:34:51.730605  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:34:51.771532  984872 cri.go:89] found id: ""
	I1210 07:34:51.771554  984872 logs.go:282] 0 containers: []
	W1210 07:34:51.771563  984872 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:51.771569  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:34:51.771632  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:34:51.806081  984872 cri.go:89] found id: ""
	I1210 07:34:51.806104  984872 logs.go:282] 0 containers: []
	W1210 07:34:51.806112  984872 logs.go:284] No container was found matching "etcd"
	I1210 07:34:51.806118  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:34:51.806206  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:34:51.836422  984872 cri.go:89] found id: ""
	I1210 07:34:51.836445  984872 logs.go:282] 0 containers: []
	W1210 07:34:51.836453  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:34:51.836459  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:34:51.836517  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:34:51.865937  984872 cri.go:89] found id: ""
	I1210 07:34:51.865961  984872 logs.go:282] 0 containers: []
	W1210 07:34:51.865969  984872 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:51.865976  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:34:51.866037  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:34:51.896136  984872 cri.go:89] found id: ""
	I1210 07:34:51.896164  984872 logs.go:282] 0 containers: []
	W1210 07:34:51.896173  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:51.896180  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:34:51.896243  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:34:51.929918  984872 cri.go:89] found id: ""
	I1210 07:34:51.929948  984872 logs.go:282] 0 containers: []
	W1210 07:34:51.929957  984872 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:51.929963  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:34:51.930023  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:34:51.956399  984872 cri.go:89] found id: ""
	I1210 07:34:51.956424  984872 logs.go:282] 0 containers: []
	W1210 07:34:51.956433  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:51.956439  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:34:51.956499  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:34:51.981888  984872 cri.go:89] found id: ""
	I1210 07:34:51.981913  984872 logs.go:282] 0 containers: []
	W1210 07:34:51.981922  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:34:51.981932  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:51.981945  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:51.998789  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:51.998868  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:52.195500  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:52.195524  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:34:52.195538  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:34:52.260649  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:34:52.260694  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:52.312016  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:52.312045  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
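With every control-plane container gone after the final reset, the kubelet journal is the only remaining evidence. The same gather is reproducible on the node (--no-pager added here for non-interactive use):

    sudo journalctl -u kubelet -n 400 --no-pager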
	W1210 07:34:52.382203  984872 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000520681s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 07:34:52.382271  984872 out.go:285] * 
	W1210 07:34:52.382443  984872 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000520681s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
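The cgroups v1 warning in the stderr block names the kubelet configuration option 'FailCgroupV1', and the [patches] lines show kubeadm already applying a strategic-merge patch to the "kubeletconfiguration" target. A minimal sketch of such a patch, assuming the field is spelled failCgroupV1 in kubelet.config.k8s.io/v1beta1 and that the patch directory is handed to kubeadm via --patches (both names are assumptions, not taken from this log):

	# Hypothetical kubeadm patch file opting back in to cgroup v1
	# (field name failCgroupV1 is an assumption; verify against the
	# KubeletConfiguration reference for v1.35).
	mkdir -p /tmp/patches
	cat > /tmp/patches/kubeletconfiguration.yaml <<'EOF'
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF
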
	
	W1210 07:34:52.382477  984872 out.go:285] * 
	W1210 07:34:52.384689  984872 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:34:52.390225  984872 out.go:203] 
	W1210 07:34:52.393892  984872 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000520681s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:34:52.394169  984872 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:34:52.394201  984872 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:34:52.398189  984872 out.go:203] 

** /stderr **
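Spelled out for this profile, the retry described by the suggestion in the log above would be the following (a sketch; the --extra-config flag text is taken verbatim from the suggestion, and there is no guarantee it resolves the kubelet failure):

	# Retry the start with the suggested kubelet cgroup-driver override.
	out/minikube-linux-arm64 start -p kubernetes-upgrade-006690 --extra-config=kubelet.cgroup-driver=systemd
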
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-006690 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-006690 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-006690 version --output=json: exit status 1 (154.668747ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
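The kubectl failure above is a plain TCP refusal on the apiserver endpoint, independent of the client. It can be confirmed without kubectl (a sketch; host and port are taken from the stderr message, -k because the cluster CA is minikube-local, and a refusal is the expected result while the control plane is down):

	# Expect "connection refused" while the apiserver is not running.
	curl -k --max-time 5 https://192.168.76.2:8443/healthz || echo apiserver unreachable
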
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-10 07:34:53.517180611 +0000 UTC m=+4951.920231007
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-006690
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-006690:

-- stdout --
	[
	    {
	        "Id": "c9a63ec50993efd455fe5a6a5af7cc7410c8e841162f9a1a04af72629abe4ef0",
	        "Created": "2025-12-10T07:21:41.547646072Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 985132,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:22:17.501946699Z",
	            "FinishedAt": "2025-12-10T07:22:16.735662271Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/c9a63ec50993efd455fe5a6a5af7cc7410c8e841162f9a1a04af72629abe4ef0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c9a63ec50993efd455fe5a6a5af7cc7410c8e841162f9a1a04af72629abe4ef0/hostname",
	        "HostsPath": "/var/lib/docker/containers/c9a63ec50993efd455fe5a6a5af7cc7410c8e841162f9a1a04af72629abe4ef0/hosts",
	        "LogPath": "/var/lib/docker/containers/c9a63ec50993efd455fe5a6a5af7cc7410c8e841162f9a1a04af72629abe4ef0/c9a63ec50993efd455fe5a6a5af7cc7410c8e841162f9a1a04af72629abe4ef0-json.log",
	        "Name": "/kubernetes-upgrade-006690",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-006690:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-006690",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c9a63ec50993efd455fe5a6a5af7cc7410c8e841162f9a1a04af72629abe4ef0",
	                "LowerDir": "/var/lib/docker/overlay2/61bed7fd8d0e91f264865e7a2f8375d2d181214ae3fd1e84f40eb5081dd0d5ee-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/61bed7fd8d0e91f264865e7a2f8375d2d181214ae3fd1e84f40eb5081dd0d5ee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/61bed7fd8d0e91f264865e7a2f8375d2d181214ae3fd1e84f40eb5081dd0d5ee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/61bed7fd8d0e91f264865e7a2f8375d2d181214ae3fd1e84f40eb5081dd0d5ee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-006690",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-006690/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-006690",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-006690",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-006690",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb32b91be53c82874f2b41d6b086bd96f101053ed330155baf483c79dbbb49d4",
	            "SandboxKey": "/var/run/docker/netns/fb32b91be53c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33760"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33761"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33764"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33762"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33763"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-006690": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:a3:9b:ac:44:c7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bea664b0fb0094dd1c759e7c972a6b01f828dbbc5d2cace255051a0b55e8b739",
	                    "EndpointID": "78193c77de2a3768364086b6583bf220a0ecb9fa63336ca0d92a046383633337",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-006690",
	                        "c9a63ec50993"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
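When the full inspect dump above is more than a post-mortem needs, the same data can be narrowed with a Go template, in the same style as the --format argument of the status command below (a sketch; the network key is the profile name shown in this output):

	# Print just the container state and its IP on the profile network.
	docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "kubernetes-upgrade-006690").IPAddress}}' kubernetes-upgrade-006690
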
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-006690 -n kubernetes-upgrade-006690
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-006690 -n kubernetes-upgrade-006690: exit status 2 (436.288755ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-006690 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p kubernetes-upgrade-006690 logs -n 25: (1.218252422s)
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-945825 sudo systemctl status kubelet --all --full --no-pager                                           │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	│ ssh     │ -p cilium-945825 sudo systemctl cat kubelet --no-pager                                                           │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	│ ssh     │ -p cilium-945825 sudo journalctl -xeu kubelet --all --full --no-pager                                            │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	│ ssh     │ -p cilium-945825 sudo cat /etc/kubernetes/kubelet.conf                                                           │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	│ ssh     │ -p cilium-945825 sudo cat /var/lib/kubelet/config.yaml                                                           │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	│ ssh     │ -p cilium-945825 sudo systemctl status docker --all --full --no-pager                                            │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	│ ssh     │ -p cilium-945825 sudo systemctl cat docker --no-pager                                                            │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	│ ssh     │ -p cilium-945825 sudo cat /etc/docker/daemon.json                                                                │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	│ ssh     │ -p cilium-945825 sudo docker system info                                                                         │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	│ ssh     │ -p cilium-945825 sudo systemctl status cri-docker --all --full --no-pager                                        │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	│ ssh     │ -p cilium-945825 sudo systemctl cat cri-docker --no-pager                                                        │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	│ ssh     │ -p cilium-945825 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                   │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	│ ssh     │ -p cilium-945825 sudo cat /usr/lib/systemd/system/cri-docker.service                                             │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	│ ssh     │ -p cilium-945825 sudo cri-dockerd --version                                                                      │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	│ ssh     │ -p cilium-945825 sudo systemctl status containerd --all --full --no-pager                                        │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	│ ssh     │ -p cilium-945825 sudo systemctl cat containerd --no-pager                                                        │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	│ ssh     │ -p cilium-945825 sudo cat /lib/systemd/system/containerd.service                                                 │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	│ ssh     │ -p cilium-945825 sudo cat /etc/containerd/config.toml                                                            │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	│ ssh     │ -p cilium-945825 sudo containerd config dump                                                                     │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	│ ssh     │ -p cilium-945825 sudo systemctl status crio --all --full --no-pager                                              │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	│ ssh     │ -p cilium-945825 sudo systemctl cat crio --no-pager                                                              │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	│ ssh     │ -p cilium-945825 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                    │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	│ ssh     │ -p cilium-945825 sudo crio config                                                                                │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	│ delete  │ -p cilium-945825                                                                                                 │ cilium-945825            │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │ 10 Dec 25 07:34 UTC │
	│ start   │ -p force-systemd-env-355914 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd │ force-systemd-env-355914 │ jenkins │ v1.37.0 │ 10 Dec 25 07:34 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
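	The audit table above records each diagnostic as a separate minikube ssh invocation. Against the profile this post-mortem concerns, the kubelet-related subset can be batched into a single call (a sketch; the inner commands are the ones listed in the table, only the profile name differs):

	# Collect kubelet status and recent journal entries in one pass.
	minikube ssh -p kubernetes-upgrade-006690 -- 'sudo systemctl status kubelet --all --full --no-pager; sudo journalctl -xeu kubelet --no-pager | tail -n 30'
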
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:34:23
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:34:23.881012 1029448 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:34:23.881125 1029448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:34:23.881135 1029448 out.go:374] Setting ErrFile to fd 2...
	I1210 07:34:23.881140 1029448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:34:23.881398 1029448 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 07:34:23.881798 1029448 out.go:368] Setting JSON to false
	I1210 07:34:23.882692 1029448 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22588,"bootTime":1765329476,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 07:34:23.882763 1029448 start.go:143] virtualization:  
	I1210 07:34:23.886173 1029448 out.go:179] * [force-systemd-env-355914] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:34:23.889171 1029448 notify.go:221] Checking for updates...
	I1210 07:34:23.889714 1029448 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:34:23.892735 1029448 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:34:23.895599 1029448 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:34:23.898576 1029448 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 07:34:23.901518 1029448 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:34:23.904453 1029448 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1210 07:34:23.907857 1029448 config.go:182] Loaded profile config "kubernetes-upgrade-006690": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:34:23.908022 1029448 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:34:23.942840 1029448 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:34:23.943016 1029448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:34:24.005001 1029448 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:34:23.992370408 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:34:24.005135 1029448 docker.go:319] overlay module found
	I1210 07:34:24.008331 1029448 out.go:179] * Using the docker driver based on user configuration
	I1210 07:34:24.011282 1029448 start.go:309] selected driver: docker
	I1210 07:34:24.011311 1029448 start.go:927] validating driver "docker" against <nil>
	I1210 07:34:24.011325 1029448 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:34:24.012071 1029448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:34:24.073867 1029448 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:34:24.063939855 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:34:24.074027 1029448 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:34:24.074264 1029448 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 07:34:24.077179 1029448 out.go:179] * Using Docker driver with root privileges
	I1210 07:34:24.080032 1029448 cni.go:84] Creating CNI manager for ""
	I1210 07:34:24.080103 1029448 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:34:24.080116 1029448 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 07:34:24.080186 1029448 start.go:353] cluster config:
	{Name:force-systemd-env-355914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-355914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:34:24.085190 1029448 out.go:179] * Starting "force-systemd-env-355914" primary control-plane node in "force-systemd-env-355914" cluster
	I1210 07:34:24.087918 1029448 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:34:24.090802 1029448 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:34:24.093640 1029448 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1210 07:34:24.093689 1029448 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1210 07:34:24.093704 1029448 cache.go:65] Caching tarball of preloaded images
	I1210 07:34:24.093725 1029448 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:34:24.093806 1029448 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 07:34:24.093817 1029448 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1210 07:34:24.093924 1029448 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/config.json ...
	I1210 07:34:24.093943 1029448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/config.json: {Name:mk2f46780f22b80679e76a52c5a3cf4920f258c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:34:24.113218 1029448 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:34:24.113243 1029448 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:34:24.113263 1029448 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:34:24.113295 1029448 start.go:360] acquireMachinesLock for force-systemd-env-355914: {Name:mk4b5439c71ed0038ec946d05694504f5d1869ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:34:24.113414 1029448 start.go:364] duration metric: took 98.144µs to acquireMachinesLock for "force-systemd-env-355914"
	I1210 07:34:24.113445 1029448 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-355914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-355914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:34:24.113525 1029448 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:34:24.116827 1029448 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:34:24.117079 1029448 start.go:159] libmachine.API.Create for "force-systemd-env-355914" (driver="docker")
	I1210 07:34:24.117119 1029448 client.go:173] LocalClient.Create starting
	I1210 07:34:24.117190 1029448 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem
	I1210 07:34:24.117241 1029448 main.go:143] libmachine: Decoding PEM data...
	I1210 07:34:24.117261 1029448 main.go:143] libmachine: Parsing certificate...
	I1210 07:34:24.117314 1029448 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem
	I1210 07:34:24.117338 1029448 main.go:143] libmachine: Decoding PEM data...
	I1210 07:34:24.117353 1029448 main.go:143] libmachine: Parsing certificate...
	I1210 07:34:24.117729 1029448 cli_runner.go:164] Run: docker network inspect force-systemd-env-355914 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:34:24.133680 1029448 cli_runner.go:211] docker network inspect force-systemd-env-355914 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:34:24.133790 1029448 network_create.go:284] running [docker network inspect force-systemd-env-355914] to gather additional debugging logs...
	I1210 07:34:24.133812 1029448 cli_runner.go:164] Run: docker network inspect force-systemd-env-355914
	W1210 07:34:24.149286 1029448 cli_runner.go:211] docker network inspect force-systemd-env-355914 returned with exit code 1
	I1210 07:34:24.149329 1029448 network_create.go:287] error running [docker network inspect force-systemd-env-355914]: docker network inspect force-systemd-env-355914: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-355914 not found
	I1210 07:34:24.149344 1029448 network_create.go:289] output of [docker network inspect force-systemd-env-355914]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-355914 not found
	
	** /stderr **
	I1210 07:34:24.149444 1029448 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:34:24.166598 1029448 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7092cc4ae12c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:9b:65:77:38:2f} reservation:<nil>}
	I1210 07:34:24.166959 1029448 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-948cd8ab8a49 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6e:79:0a:43:2f:62} reservation:<nil>}
	I1210 07:34:24.167283 1029448 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-21ed51b7c74f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:35:c0:64:58:42} reservation:<nil>}
	I1210 07:34:24.167669 1029448 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bea664b0fb00 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:e9:c4:f2:18:65} reservation:<nil>}
	I1210 07:34:24.168106 1029448 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2eab0}
	I1210 07:34:24.168129 1029448 network_create.go:124] attempt to create docker network force-systemd-env-355914 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1210 07:34:24.168190 1029448 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-355914 force-systemd-env-355914
	I1210 07:34:24.230650 1029448 network_create.go:108] docker network force-systemd-env-355914 192.168.85.0/24 created
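	The scan above walks minikube's candidate private subnets (192.168.49.0/24, .58, .67, .76) until it finds one unreserved, then creates a bridge network pinned to 192.168.85.0/24 with MTU 1500. As a sketch (assuming Docker is reachable on the host running the test), the assigned subnet and gateway can be read back with:

	    # Read back the subnet/gateway minikube assigned to the profile network
	    docker network inspect force-systemd-env-355914 \
	      --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'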
	I1210 07:34:24.230686 1029448 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-355914" container
	I1210 07:34:24.230778 1029448 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:34:24.247041 1029448 cli_runner.go:164] Run: docker volume create force-systemd-env-355914 --label name.minikube.sigs.k8s.io=force-systemd-env-355914 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:34:24.265671 1029448 oci.go:103] Successfully created a docker volume force-systemd-env-355914
	I1210 07:34:24.265763 1029448 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-355914-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-355914 --entrypoint /usr/bin/test -v force-systemd-env-355914:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
	I1210 07:34:24.761494 1029448 oci.go:107] Successfully prepared a docker volume force-systemd-env-355914
	I1210 07:34:24.761567 1029448 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1210 07:34:24.761582 1029448 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 07:34:24.761663 1029448 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-355914:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 07:34:28.758593 1029448 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-355914:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir: (3.996889242s)
	I1210 07:34:28.758631 1029448 kic.go:203] duration metric: took 3.997045364s to extract preloaded images to volume ...
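	The preload tarball is never unpacked on the host: a throwaway container mounts the lz4 archive read-only alongside the profile's named volume and untars into the volume that later backs the node's /var. A hand-rolled equivalent of the logged command, as a sketch (the cache path below assumes the default MINIKUBE_HOME of ~/.minikube; this run keeps it under the jenkins integration directory):

	    # Extract the preloaded images into the named volume, mirroring the logged run
	    docker run --rm --entrypoint /usr/bin/tar \
	      -v "$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro" \
	      -v force-systemd-env-355914:/extractDir \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca \
	      -I lz4 -xf /preloaded.tar -C /extractDir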
	W1210 07:34:28.758784 1029448 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 07:34:28.758909 1029448 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:34:28.822018 1029448 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-355914 --name force-systemd-env-355914 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-355914 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-355914 --network force-systemd-env-355914 --ip 192.168.85.2 --volume force-systemd-env-355914:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
	I1210 07:34:29.153892 1029448 cli_runner.go:164] Run: docker container inspect force-systemd-env-355914 --format={{.State.Running}}
	I1210 07:34:29.179312 1029448 cli_runner.go:164] Run: docker container inspect force-systemd-env-355914 --format={{.State.Status}}
	I1210 07:34:29.200008 1029448 cli_runner.go:164] Run: docker exec force-systemd-env-355914 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:34:29.244680 1029448 oci.go:144] the created container "force-systemd-env-355914" has a running status.
	I1210 07:34:29.244718 1029448 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/force-systemd-env-355914/id_rsa...
	I1210 07:34:29.406194 1029448 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/force-systemd-env-355914/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1210 07:34:29.406295 1029448 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-784887/.minikube/machines/force-systemd-env-355914/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:34:29.433016 1029448 cli_runner.go:164] Run: docker container inspect force-systemd-env-355914 --format={{.State.Status}}
	I1210 07:34:29.457942 1029448 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:34:29.457961 1029448 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-355914 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:34:29.533387 1029448 cli_runner.go:164] Run: docker container inspect force-systemd-env-355914 --format={{.State.Status}}
	I1210 07:34:29.562912 1029448 machine.go:94] provisionDockerMachine start ...
	I1210 07:34:29.563011 1029448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-355914
	I1210 07:34:29.588264 1029448 main.go:143] libmachine: Using SSH client type: native
	I1210 07:34:29.588618 1029448 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33785 <nil> <nil>}
	I1210 07:34:29.588628 1029448 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:34:29.589333 1029448 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47062->127.0.0.1:33785: read: connection reset by peer
	I1210 07:34:32.721998 1029448 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-355914
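	Container port 22 is published to an ephemeral host port (33785 in this run) and the freshly generated id_rsa is installed as /home/docker/.ssh/authorized_keys, so the first dial can race sshd and get a connection reset before the retry succeeds, as seen above. A manual session against the same node would look roughly like this (key path assumes the default ~/.minikube; adjust for this run's MINIKUBE_HOME):

	    # SSH into the kic node over the published port; the node user is "docker"
	    # (~/.minikube is an assumption; this run uses a jenkins-specific home)
	    ssh -i ~/.minikube/machines/force-systemd-env-355914/id_rsa \
	        -o StrictHostKeyChecking=no -p 33785 docker@127.0.0.1 hostname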
	
	I1210 07:34:32.722024 1029448 ubuntu.go:182] provisioning hostname "force-systemd-env-355914"
	I1210 07:34:32.722110 1029448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-355914
	I1210 07:34:32.739481 1029448 main.go:143] libmachine: Using SSH client type: native
	I1210 07:34:32.739813 1029448 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33785 <nil> <nil>}
	I1210 07:34:32.739831 1029448 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-355914 && echo "force-systemd-env-355914" | sudo tee /etc/hostname
	I1210 07:34:32.884619 1029448 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-355914
	
	I1210 07:34:32.884779 1029448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-355914
	I1210 07:34:32.902832 1029448 main.go:143] libmachine: Using SSH client type: native
	I1210 07:34:32.903150 1029448 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33785 <nil> <nil>}
	I1210 07:34:32.903172 1029448 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-355914' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-355914/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-355914' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:34:33.039311 1029448 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:34:33.039385 1029448 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 07:34:33.039436 1029448 ubuntu.go:190] setting up certificates
	I1210 07:34:33.039467 1029448 provision.go:84] configureAuth start
	I1210 07:34:33.039556 1029448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-355914
	I1210 07:34:33.056695 1029448 provision.go:143] copyHostCerts
	I1210 07:34:33.056747 1029448 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 07:34:33.056788 1029448 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 07:34:33.056796 1029448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 07:34:33.056879 1029448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 07:34:33.056967 1029448 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 07:34:33.056985 1029448 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 07:34:33.056989 1029448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 07:34:33.057015 1029448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 07:34:33.057077 1029448 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 07:34:33.057094 1029448 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 07:34:33.057098 1029448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 07:34:33.057122 1029448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 07:34:33.057174 1029448 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-355914 san=[127.0.0.1 192.168.85.2 force-systemd-env-355914 localhost minikube]
	I1210 07:34:33.326591 1029448 provision.go:177] copyRemoteCerts
	I1210 07:34:33.326666 1029448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:34:33.326712 1029448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-355914
	I1210 07:34:33.349828 1029448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33785 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/force-systemd-env-355914/id_rsa Username:docker}
	I1210 07:34:33.446285 1029448 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 07:34:33.446355 1029448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:34:33.463760 1029448 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 07:34:33.463822 1029448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1210 07:34:33.483210 1029448 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 07:34:33.483327 1029448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:34:33.501324 1029448 provision.go:87] duration metric: took 461.82909ms to configureAuth
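	configureAuth copies the host CA material into MINIKUBE_HOME and mints a server certificate whose SANs cover 127.0.0.1, the node IP 192.168.85.2, the hostname, localhost, and minikube, then ships it to /etc/docker on the node. One way to confirm the SAN list on the generated server.pem, as a sketch (requires OpenSSL 1.1.1+ for -ext; ~/.minikube is again an assumption):

	    # Print the subjectAltName extension of the minted server certificate
	    openssl x509 -noout -ext subjectAltName -in ~/.minikube/machines/server.pem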
	I1210 07:34:33.501359 1029448 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:34:33.501595 1029448 config.go:182] Loaded profile config "force-systemd-env-355914": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1210 07:34:33.501608 1029448 machine.go:97] duration metric: took 3.938678867s to provisionDockerMachine
	I1210 07:34:33.501616 1029448 client.go:176] duration metric: took 9.384485794s to LocalClient.Create
	I1210 07:34:33.501634 1029448 start.go:167] duration metric: took 9.384560068s to libmachine.API.Create "force-systemd-env-355914"
	I1210 07:34:33.501646 1029448 start.go:293] postStartSetup for "force-systemd-env-355914" (driver="docker")
	I1210 07:34:33.501655 1029448 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:34:33.501712 1029448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:34:33.501756 1029448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-355914
	I1210 07:34:33.518627 1029448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33785 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/force-systemd-env-355914/id_rsa Username:docker}
	I1210 07:34:33.614570 1029448 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:34:33.617872 1029448 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:34:33.617901 1029448 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:34:33.617913 1029448 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 07:34:33.617978 1029448 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 07:34:33.618064 1029448 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 07:34:33.618076 1029448 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> /etc/ssl/certs/7867512.pem
	I1210 07:34:33.618179 1029448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:34:33.625920 1029448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:34:33.642767 1029448 start.go:296] duration metric: took 141.106414ms for postStartSetup
	I1210 07:34:33.643141 1029448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-355914
	I1210 07:34:33.659685 1029448 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/config.json ...
	I1210 07:34:33.659968 1029448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:34:33.660018 1029448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-355914
	I1210 07:34:33.678562 1029448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33785 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/force-systemd-env-355914/id_rsa Username:docker}
	I1210 07:34:33.771604 1029448 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:34:33.776169 1029448 start.go:128] duration metric: took 9.662630042s to createHost
	I1210 07:34:33.776199 1029448 start.go:83] releasing machines lock for "force-systemd-env-355914", held for 9.66277237s
	I1210 07:34:33.776273 1029448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-355914
	I1210 07:34:33.792973 1029448 ssh_runner.go:195] Run: cat /version.json
	I1210 07:34:33.793037 1029448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-355914
	I1210 07:34:33.793290 1029448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:34:33.793357 1029448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-355914
	I1210 07:34:33.816633 1029448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33785 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/force-systemd-env-355914/id_rsa Username:docker}
	I1210 07:34:33.822728 1029448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33785 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/force-systemd-env-355914/id_rsa Username:docker}
	I1210 07:34:33.910452 1029448 ssh_runner.go:195] Run: systemctl --version
	I1210 07:34:33.999288 1029448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:34:34.005573 1029448 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:34:34.005654 1029448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:34:34.033671 1029448 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1210 07:34:34.033698 1029448 start.go:496] detecting cgroup driver to use...
	I1210 07:34:34.033716 1029448 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1210 07:34:34.033782 1029448 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:34:34.056726 1029448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:34:34.073437 1029448 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:34:34.073517 1029448 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:34:34.091620 1029448 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:34:34.113791 1029448 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:34:34.235890 1029448 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:34:34.356858 1029448 docker.go:234] disabling docker service ...
	I1210 07:34:34.356926 1029448 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:34:34.378012 1029448 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:34:34.391057 1029448 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:34:34.511474 1029448 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:34:34.639164 1029448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:34:34.652836 1029448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:34:34.666962 1029448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:34:34.675835 1029448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:34:34.684468 1029448 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1210 07:34:34.684536 1029448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1210 07:34:34.693150 1029448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:34:34.701931 1029448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:34:34.710636 1029448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:34:34.719342 1029448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:34:34.727285 1029448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:34:34.737227 1029448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:34:34.746261 1029448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:34:34.755881 1029448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:34:34.763854 1029448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:34:34.771136 1029448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:34:34.908524 1029448 ssh_runner.go:195] Run: sudo systemctl restart containerd
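	The sed edits above switch containerd onto the systemd cgroup driver (SystemdCgroup = true for the runc v2 runtime), pin the sandbox image to pause:3.10.1, and re-enable unprivileged ports before the daemon-reload and restart. After the restart, the effective setting can be spot-checked in place (run inside the node):

	    # Confirm the cgroup driver flag survived the containerd restart
	    sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml
	    # expected: SystemdCgroup = true under the runc options table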
	I1210 07:34:35.042975 1029448 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:34:35.043045 1029448 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:34:35.047018 1029448 start.go:564] Will wait 60s for crictl version
	I1210 07:34:35.047085 1029448 ssh_runner.go:195] Run: which crictl
	I1210 07:34:35.050882 1029448 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:34:35.075796 1029448 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:34:35.075965 1029448 ssh_runner.go:195] Run: containerd --version
	I1210 07:34:35.103101 1029448 ssh_runner.go:195] Run: containerd --version
	I1210 07:34:35.131835 1029448 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1210 07:34:35.134957 1029448 cli_runner.go:164] Run: docker network inspect force-systemd-env-355914 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:34:35.151760 1029448 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 07:34:35.156070 1029448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:34:35.167055 1029448 kubeadm.go:884] updating cluster {Name:force-systemd-env-355914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-355914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:34:35.167217 1029448 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1210 07:34:35.167295 1029448 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:34:35.193589 1029448 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:34:35.193614 1029448 containerd.go:534] Images already preloaded, skipping extraction
	I1210 07:34:35.193679 1029448 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:34:35.218754 1029448 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:34:35.218785 1029448 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:34:35.218794 1029448 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 containerd true true} ...
	I1210 07:34:35.218890 1029448 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-355914 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-355914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:34:35.218963 1029448 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:34:35.245377 1029448 cni.go:84] Creating CNI manager for ""
	I1210 07:34:35.245402 1029448 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:34:35.245421 1029448 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:34:35.245445 1029448 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-355914 NodeName:force-systemd-env-355914 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:34:35.245569 1029448 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-env-355914"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:34:35.245643 1029448 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 07:34:35.254144 1029448 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:34:35.254254 1029448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:34:35.262970 1029448 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 07:34:35.276680 1029448 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 07:34:35.290212 1029448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2236 bytes)
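	The rendered file stacks four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), all keyed to v1.34.2 and the systemd cgroup driver. As a sketch, kubeadm itself can sanity-check a file of this shape before init runs (the "config validate" subcommand exists in kubeadm v1.26+; run inside the node against the path just written):

	    # Validate the multi-document kubeadm config that was just written
	    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new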
	I1210 07:34:35.304473 1029448 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:34:35.308392 1029448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:34:35.318487 1029448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:34:35.440638 1029448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:34:35.457101 1029448 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914 for IP: 192.168.85.2
	I1210 07:34:35.457179 1029448 certs.go:195] generating shared ca certs ...
	I1210 07:34:35.457213 1029448 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:34:35.457399 1029448 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 07:34:35.457481 1029448 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 07:34:35.457515 1029448 certs.go:257] generating profile certs ...
	I1210 07:34:35.457627 1029448 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/client.key
	I1210 07:34:35.457660 1029448 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/client.crt with IP's: []
	I1210 07:34:35.677816 1029448 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/client.crt ...
	I1210 07:34:35.677850 1029448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/client.crt: {Name:mk50adc400db791d2a34b42eece469ea6b2aaf4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:34:35.678058 1029448 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/client.key ...
	I1210 07:34:35.678081 1029448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/client.key: {Name:mkecf9f33f6e890646ac952a50913b115ad830db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:34:35.678178 1029448 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/apiserver.key.ce2a626e
	I1210 07:34:35.678198 1029448 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/apiserver.crt.ce2a626e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1210 07:34:35.727951 1029448 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/apiserver.crt.ce2a626e ...
	I1210 07:34:35.727983 1029448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/apiserver.crt.ce2a626e: {Name:mkfd73776e1c624eae5acfff04fd7161e0eba07b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:34:35.728162 1029448 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/apiserver.key.ce2a626e ...
	I1210 07:34:35.728178 1029448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/apiserver.key.ce2a626e: {Name:mkeeb76dafa9596d91cb7fa425059ae0c9698d74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:34:35.728268 1029448 certs.go:382] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/apiserver.crt.ce2a626e -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/apiserver.crt
	I1210 07:34:35.728347 1029448 certs.go:386] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/apiserver.key.ce2a626e -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/apiserver.key
	I1210 07:34:35.728408 1029448 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/proxy-client.key
	I1210 07:34:35.728426 1029448 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/proxy-client.crt with IP's: []
	I1210 07:34:36.069299 1029448 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/proxy-client.crt ...
	I1210 07:34:36.069334 1029448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/proxy-client.crt: {Name:mk3d63907fba7840e327664f8052e739a4059f04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:34:36.069522 1029448 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/proxy-client.key ...
	I1210 07:34:36.069537 1029448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/proxy-client.key: {Name:mk62f15a7717eb799b032eab8dc8161e98668303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:34:36.069635 1029448 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 07:34:36.069659 1029448 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 07:34:36.069672 1029448 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 07:34:36.069688 1029448 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 07:34:36.069703 1029448 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 07:34:36.069721 1029448 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 07:34:36.069735 1029448 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 07:34:36.069750 1029448 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 07:34:36.069808 1029448 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 07:34:36.069855 1029448 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 07:34:36.069867 1029448 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:34:36.069895 1029448 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:34:36.069924 1029448 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:34:36.069954 1029448 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 07:34:36.070003 1029448 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:34:36.070041 1029448 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> /usr/share/ca-certificates/7867512.pem
	I1210 07:34:36.070055 1029448 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:34:36.070066 1029448 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem -> /usr/share/ca-certificates/786751.pem
	I1210 07:34:36.070615 1029448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:34:36.090593 1029448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:34:36.110102 1029448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:34:36.128195 1029448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:34:36.146860 1029448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1210 07:34:36.165039 1029448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:34:36.184381 1029448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:34:36.201716 1029448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/force-systemd-env-355914/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:34:36.218900 1029448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 07:34:36.238146 1029448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:34:36.256020 1029448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 07:34:36.273972 1029448 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:34:36.286906 1029448 ssh_runner.go:195] Run: openssl version
	I1210 07:34:36.306986 1029448 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 07:34:36.318187 1029448 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 07:34:36.331785 1029448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 07:34:36.342942 1029448 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 07:34:36.343081 1029448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 07:34:36.396239 1029448 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:34:36.403802 1029448 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7867512.pem /etc/ssl/certs/3ec20f2e.0
	I1210 07:34:36.411343 1029448 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:34:36.419002 1029448 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:34:36.426556 1029448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:34:36.430326 1029448 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:34:36.430395 1029448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:34:36.471517 1029448 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:34:36.479206 1029448 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 07:34:36.486804 1029448 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 07:34:36.494201 1029448 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 07:34:36.501806 1029448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 07:34:36.505878 1029448 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 07:34:36.505948 1029448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 07:34:36.547596 1029448 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:34:36.555094 1029448 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/786751.pem /etc/ssl/certs/51391683.0
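	Each CA lands on the node twice: by name under /usr/share/ca-certificates, and in /etc/ssl/certs as a symlink named after the certificate's subject hash, which is what the openssl x509 -hash runs above compute (b5213941.0 for minikubeCA.pem; 3ec20f2e.0 and 51391683.0 for the other two). The linking step, collapsed into one shell snippet:

	    # Recreate the hash-named symlink OpenSSL uses for CA lookup
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"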
	I1210 07:34:36.562683 1029448 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:34:36.566221 1029448 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:34:36.566285 1029448 kubeadm.go:401] StartCluster: {Name:force-systemd-env-355914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-355914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:34:36.566375 1029448 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:34:36.566438 1029448 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:34:36.592723 1029448 cri.go:89] found id: ""
	I1210 07:34:36.592835 1029448 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:34:36.600830 1029448 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:34:36.608562 1029448 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:34:36.608661 1029448 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:34:36.616813 1029448 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:34:36.616834 1029448 kubeadm.go:158] found existing configuration files:
	
	I1210 07:34:36.616918 1029448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:34:36.625000 1029448 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:34:36.625067 1029448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:34:36.632766 1029448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:34:36.640787 1029448 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:34:36.640864 1029448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:34:36.648479 1029448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:34:36.656260 1029448 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:34:36.656376 1029448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:34:36.663984 1029448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:34:36.671828 1029448 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:34:36.671946 1029448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:34:36.679442 1029448 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:34:36.720486 1029448 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 07:34:36.720558 1029448 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:34:36.744041 1029448 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:34:36.744184 1029448 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:34:36.744267 1029448 kubeadm.go:319] OS: Linux
	I1210 07:34:36.744346 1029448 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:34:36.744421 1029448 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:34:36.744529 1029448 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:34:36.744623 1029448 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:34:36.744701 1029448 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:34:36.744782 1029448 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:34:36.744853 1029448 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:34:36.744930 1029448 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:34:36.745002 1029448 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:34:36.816152 1029448 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:34:36.816274 1029448 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:34:36.816374 1029448 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:34:36.821285 1029448 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:34:36.828107 1029448 out.go:252]   - Generating certificates and keys ...
	I1210 07:34:36.828288 1029448 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:34:36.828414 1029448 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:34:37.886214 1029448 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:34:38.055732 1029448 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:34:38.413997 1029448 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:34:39.509297 1029448 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:34:39.800057 1029448 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:34:39.800429 1029448 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-355914 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 07:34:40.183671 1029448 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:34:40.184251 1029448 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-355914 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 07:34:40.467317 1029448 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:34:40.883717 1029448 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:34:41.224103 1029448 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:34:41.224482 1029448 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:34:41.428438 1029448 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:34:41.511188 1029448 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:34:42.074823 1029448 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:34:42.557455 1029448 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:34:42.946749 1029448 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:34:42.947578 1029448 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:34:42.950507 1029448 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:34:42.954185 1029448 out.go:252]   - Booting up control plane ...
	I1210 07:34:42.954289 1029448 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:34:42.954368 1029448 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:34:42.954435 1029448 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:34:42.972593 1029448 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:34:42.972848 1029448 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:34:42.981188 1029448 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:34:42.981633 1029448 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:34:42.981863 1029448 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:34:43.131718 1029448 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:34:43.131840 1029448 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:34:45.132836 1029448 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001250908s
	I1210 07:34:45.138745 1029448 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 07:34:45.138865 1029448 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1210 07:34:45.138977 1029448 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 07:34:45.139080 1029448 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 07:34:48.759285 1029448 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.621836905s
	I1210 07:34:50.694597 1029448 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.557492256s
	I1210 07:34:51.638324 1029448 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501196211s
	I1210 07:34:51.672995 1029448 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 07:34:51.690971 1029448 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 07:34:51.705717 1029448 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 07:34:51.705930 1029448 kubeadm.go:319] [mark-control-plane] Marking the node force-systemd-env-355914 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 07:34:51.721766 1029448 kubeadm.go:319] [bootstrap-token] Using token: 2mn6g6.e5xr3hz7deo6p7s3
	I1210 07:34:51.724760  984872 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000520681s
	I1210 07:34:51.724787  984872 kubeadm.go:319] 
	I1210 07:34:51.724842  984872 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:34:51.724878  984872 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:34:51.724977  984872 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:34:51.724983  984872 kubeadm.go:319] 
	I1210 07:34:51.725081  984872 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:34:51.725112  984872 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:34:51.725140  984872 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:34:51.725144  984872 kubeadm.go:319] 
	I1210 07:34:51.729604  984872 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:34:51.730002  984872 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:34:51.730103  984872 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:34:51.730350  984872 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 07:34:51.730356  984872 kubeadm.go:319] 
	I1210 07:34:51.730420  984872 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 07:34:51.730506  984872 kubeadm.go:403] duration metric: took 12m16.99305872s to StartCluster
	I1210 07:34:51.730542  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:34:51.730605  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:34:51.771532  984872 cri.go:89] found id: ""
	I1210 07:34:51.771554  984872 logs.go:282] 0 containers: []
	W1210 07:34:51.771563  984872 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:51.771569  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:34:51.771632  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:34:51.806081  984872 cri.go:89] found id: ""
	I1210 07:34:51.806104  984872 logs.go:282] 0 containers: []
	W1210 07:34:51.806112  984872 logs.go:284] No container was found matching "etcd"
	I1210 07:34:51.806118  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:34:51.806206  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:34:51.836422  984872 cri.go:89] found id: ""
	I1210 07:34:51.836445  984872 logs.go:282] 0 containers: []
	W1210 07:34:51.836453  984872 logs.go:284] No container was found matching "coredns"
	I1210 07:34:51.836459  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:34:51.836517  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:34:51.865937  984872 cri.go:89] found id: ""
	I1210 07:34:51.865961  984872 logs.go:282] 0 containers: []
	W1210 07:34:51.865969  984872 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:51.865976  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:34:51.866037  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:34:51.896136  984872 cri.go:89] found id: ""
	I1210 07:34:51.896164  984872 logs.go:282] 0 containers: []
	W1210 07:34:51.896173  984872 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:51.896180  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:34:51.896243  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:34:51.929918  984872 cri.go:89] found id: ""
	I1210 07:34:51.929948  984872 logs.go:282] 0 containers: []
	W1210 07:34:51.929957  984872 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:51.929963  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:34:51.930023  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:34:51.956399  984872 cri.go:89] found id: ""
	I1210 07:34:51.956424  984872 logs.go:282] 0 containers: []
	W1210 07:34:51.956433  984872 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:51.956439  984872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:34:51.956499  984872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:34:51.981888  984872 cri.go:89] found id: ""
	I1210 07:34:51.981913  984872 logs.go:282] 0 containers: []
	W1210 07:34:51.981922  984872 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:34:51.981932  984872 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:51.981945  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:51.998789  984872 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:51.998868  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:52.195500  984872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:52.195524  984872 logs.go:123] Gathering logs for containerd ...
	I1210 07:34:52.195538  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:34:52.260649  984872 logs.go:123] Gathering logs for container status ...
	I1210 07:34:52.260694  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:52.312016  984872 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:52.312045  984872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:34:52.382203  984872 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000520681s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 07:34:52.382271  984872 out.go:285] * 
	W1210 07:34:52.382443  984872 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000520681s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:34:52.382477  984872 out.go:285] * 
	W1210 07:34:52.384689  984872 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:34:52.390225  984872 out.go:203] 
	W1210 07:34:52.393892  984872 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000520681s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:34:52.394169  984872 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:34:52.394201  984872 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:34:52.398189  984872 out.go:203] 
	I1210 07:34:51.724651 1029448 out.go:252]   - Configuring RBAC rules ...
	I1210 07:34:51.724774 1029448 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 07:34:51.734866 1029448 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 07:34:51.754438 1029448 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 07:34:51.761213 1029448 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 07:34:51.767166 1029448 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 07:34:51.773186 1029448 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 07:34:52.049403 1029448 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 07:34:52.614856 1029448 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 07:34:53.051984 1029448 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 07:34:53.053691 1029448 kubeadm.go:319] 
	I1210 07:34:53.053763 1029448 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 07:34:53.053768 1029448 kubeadm.go:319] 
	I1210 07:34:53.053845 1029448 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 07:34:53.053849 1029448 kubeadm.go:319] 
	I1210 07:34:53.053880 1029448 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 07:34:53.053940 1029448 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 07:34:53.053991 1029448 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 07:34:53.053995 1029448 kubeadm.go:319] 
	I1210 07:34:53.054048 1029448 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 07:34:53.054061 1029448 kubeadm.go:319] 
	I1210 07:34:53.054109 1029448 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 07:34:53.054114 1029448 kubeadm.go:319] 
	I1210 07:34:53.054166 1029448 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 07:34:53.054241 1029448 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 07:34:53.054310 1029448 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 07:34:53.054314 1029448 kubeadm.go:319] 
	I1210 07:34:53.054399 1029448 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 07:34:53.054491 1029448 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 07:34:53.054497 1029448 kubeadm.go:319] 
	I1210 07:34:53.054581 1029448 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2mn6g6.e5xr3hz7deo6p7s3 \
	I1210 07:34:53.054684 1029448 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e9f3cb78cb77d4f01fb49055e1f2de1580fc701c72db340d5c15a42a39b8dd0 \
	I1210 07:34:53.054704 1029448 kubeadm.go:319] 	--control-plane 
	I1210 07:34:53.054708 1029448 kubeadm.go:319] 
	I1210 07:34:53.054793 1029448 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 07:34:53.054797 1029448 kubeadm.go:319] 
	I1210 07:34:53.061720 1029448 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2mn6g6.e5xr3hz7deo6p7s3 \
	I1210 07:34:53.061838 1029448 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e9f3cb78cb77d4f01fb49055e1f2de1580fc701c72db340d5c15a42a39b8dd0 
	I1210 07:34:53.067310 1029448 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1210 07:34:53.067534 1029448 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:34:53.067639 1029448 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:34:53.067656 1029448 cni.go:84] Creating CNI manager for ""
	I1210 07:34:53.067665 1029448 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:34:53.071091 1029448 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 07:34:53.074016 1029448 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 07:34:53.079162 1029448 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1210 07:34:53.079177 1029448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 07:34:53.103386 1029448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 07:34:53.852065 1029448 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 07:34:53.852257 1029448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:34:53.852365 1029448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes force-systemd-env-355914 minikube.k8s.io/updated_at=2025_12_10T07_34_53_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9 minikube.k8s.io/name=force-systemd-env-355914 minikube.k8s.io/primary=true
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 07:26:47 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:26:47.043588654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:26:47 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:26:47.044594566Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" with image id \"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\", repo tag \"registry.k8s.io/kube-proxy:v1.35.0-beta.0\", repo digest \"registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a\", size \"22429671\" in 1.66249802s"
	Dec 10 07:26:47 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:26:47.044641796Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" returns image reference \"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\""
	Dec 10 07:26:47 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:26:47.046402441Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 10 07:26:48 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:26:48.365641636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:26:48 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:26:48.367592306Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=20453241"
	Dec 10 07:26:48 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:26:48.370227941Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:26:48 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:26:48.374752100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:26:48 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:26:48.376201013Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.329751474s"
	Dec 10 07:26:48 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:26:48.376250310Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
	Dec 10 07:26:48 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:26:48.378168372Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
	Dec 10 07:26:49 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:26:49.051501901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 10 07:26:49 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:26:49.053289016Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
	Dec 10 07:26:49 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:26:49.055567584Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 10 07:26:49 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:26:49.059130451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 10 07:26:49 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:26:49.059810813Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 681.490249ms"
	Dec 10 07:26:49 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:26:49.059855515Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
	Dec 10 07:31:40 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:31:40.567731740Z" level=info msg="container event discarded" container=a03c0175e80067ba067897043d1e5f78d9e3d444c818c8d142b053aed18de62c type=CONTAINER_DELETED_EVENT
	Dec 10 07:31:40 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:31:40.582086274Z" level=info msg="container event discarded" container=a3c735caca9fc8c2457b3fdce6f47e8d4138dc346f7003928ccc73f6a70928b6 type=CONTAINER_DELETED_EVENT
	Dec 10 07:31:40 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:31:40.593380273Z" level=info msg="container event discarded" container=a1c978a340801a8ed2f74d0b735b1f4adfeb2db178f572a8c24301f507993ee4 type=CONTAINER_DELETED_EVENT
	Dec 10 07:31:40 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:31:40.593439465Z" level=info msg="container event discarded" container=336a47eabae94db8dcbc24c1334ed4b584ceba7bad445091cb0e528519aa5bc4 type=CONTAINER_DELETED_EVENT
	Dec 10 07:31:40 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:31:40.609768079Z" level=info msg="container event discarded" container=4e8d716b70eb6a3e95c233001504074eaa000d3e4d0d352dcefdd78d866bf732 type=CONTAINER_DELETED_EVENT
	Dec 10 07:31:40 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:31:40.609833762Z" level=info msg="container event discarded" container=40700d91eaa1afbb664f0b67ee3b631cb81b8d50b433fac39279257275d55af6 type=CONTAINER_DELETED_EVENT
	Dec 10 07:31:40 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:31:40.625235594Z" level=info msg="container event discarded" container=ae19757b8d394f4cfa7a41fcef54809c2fb7af60dedd5d64b00c8b318c0516f1 type=CONTAINER_DELETED_EVENT
	Dec 10 07:31:40 kubernetes-upgrade-006690 containerd[555]: time="2025-12-10T07:31:40.625304797Z" level=info msg="container event discarded" container=5fc12a961e8c965f1be11722d2ba60b89d6fb4b026648f62c28011f2249fc95f type=CONTAINER_DELETED_EVENT
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:34:55 up  6:16,  0 user,  load average: 3.89, 2.01, 1.88
	Linux kubernetes-upgrade-006690 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:34:51 kubernetes-upgrade-006690 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:34:52 kubernetes-upgrade-006690 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 10 07:34:52 kubernetes-upgrade-006690 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:34:52 kubernetes-upgrade-006690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:34:52 kubernetes-upgrade-006690 kubelet[14564]: E1210 07:34:52.151175   14564 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:34:52 kubernetes-upgrade-006690 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:34:52 kubernetes-upgrade-006690 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:34:52 kubernetes-upgrade-006690 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 10 07:34:52 kubernetes-upgrade-006690 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:34:52 kubernetes-upgrade-006690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:34:52 kubernetes-upgrade-006690 kubelet[14583]: E1210 07:34:52.894354   14583 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:34:52 kubernetes-upgrade-006690 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:34:52 kubernetes-upgrade-006690 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:34:53 kubernetes-upgrade-006690 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 10 07:34:53 kubernetes-upgrade-006690 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:34:53 kubernetes-upgrade-006690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:34:53 kubernetes-upgrade-006690 kubelet[14588]: E1210 07:34:53.674809   14588 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:34:53 kubernetes-upgrade-006690 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:34:53 kubernetes-upgrade-006690 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:34:54 kubernetes-upgrade-006690 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 10 07:34:54 kubernetes-upgrade-006690 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:34:54 kubernetes-upgrade-006690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:34:54 kubernetes-upgrade-006690 kubelet[14608]: E1210 07:34:54.453109   14608 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:34:54 kubernetes-upgrade-006690 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:34:54 kubernetes-upgrade-006690 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
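
The kubelet units quoted above fail configuration validation rather than crash: kubelet v1.35.0-beta.0 refuses to run on a cgroup v1 host unless the KubeletConfiguration explicitly opts back in, exactly as the preflight warning states. A minimal sketch, assuming the failCgroupV1 field spelling implied by the warning's 'FailCgroupV1' option (verify it against the v1.35 KubeletConfiguration reference), for checking the host's cgroup mode and writing that opt-in:

	# Report the filesystem type mounted at /sys/fs/cgroup:
	# "cgroup2fs" means cgroup v2, "tmpfs" means the legacy v1 hierarchy.
	stat -fc %T /sys/fs/cgroup/

	# Hypothetical KubeletConfiguration fragment implementing the opt-in
	# named by the warning. Note minikube regenerates
	# /var/lib/kubelet/config.yaml on start, so this fragment would need
	# to be delivered as a kubeadm/minikube config patch, not appended.
	cat <<'EOF' > kubelet-cgroupv1-optin.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF
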
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-006690 -n kubernetes-upgrade-006690
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-006690 -n kubernetes-upgrade-006690: exit status 2 (529.602036ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-006690" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-006690" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-006690
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-006690: (2.734788898s)
--- FAIL: TestKubernetesUpgrade (804.40s)
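
minikube's own suggestion above is the quickest local reproduction path; a sketch of retrying the failed start with that override, reusing the profile name, driver, runtime, and Kubernetes version from this run (whether the systemd cgroup-driver override alone clears the cgroup v1 validation error is not confirmed by this log):

	out/minikube-linux-arm64 start -p kubernetes-upgrade-006690 \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
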

x
+
TestStartStop/group/no-preload/serial/FirstStart (512.19s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-587009 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-587009 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m30.650810328s)

-- stdout --
	* [no-preload-587009] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-587009" primary control-plane node in "no-preload-587009" cluster
	* Pulling base image v0.0.48-1765319469-22089 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	I1210 07:40:55.863629 1061272 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:40:55.863749 1061272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:40:55.863801 1061272 out.go:374] Setting ErrFile to fd 2...
	I1210 07:40:55.863818 1061272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:40:55.864058 1061272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 07:40:55.864477 1061272 out.go:368] Setting JSON to false
	I1210 07:40:55.865588 1061272 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22980,"bootTime":1765329476,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 07:40:55.865661 1061272 start.go:143] virtualization:  
	I1210 07:40:55.869268 1061272 out.go:179] * [no-preload-587009] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:40:55.873325 1061272 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:40:55.873659 1061272 notify.go:221] Checking for updates...
	I1210 07:40:55.878853 1061272 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:40:55.881860 1061272 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:40:55.884859 1061272 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 07:40:55.887722 1061272 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:40:55.890648 1061272 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:40:55.894002 1061272 config.go:182] Loaded profile config "embed-certs-254586": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1210 07:40:55.894115 1061272 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:40:55.925667 1061272 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:40:55.925861 1061272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:40:55.987856 1061272 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:40:55.977925073 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:40:55.987966 1061272 docker.go:319] overlay module found
	I1210 07:40:55.991085 1061272 out.go:179] * Using the docker driver based on user configuration
	I1210 07:40:55.993881 1061272 start.go:309] selected driver: docker
	I1210 07:40:55.993912 1061272 start.go:927] validating driver "docker" against <nil>
	I1210 07:40:55.993926 1061272 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:40:55.994819 1061272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:40:56.053662 1061272 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:40:56.043425616 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:40:56.053828 1061272 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:40:56.054061 1061272 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:40:56.057047 1061272 out.go:179] * Using Docker driver with root privileges
	I1210 07:40:56.059913 1061272 cni.go:84] Creating CNI manager for ""
	I1210 07:40:56.059988 1061272 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:40:56.060004 1061272 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 07:40:56.060082 1061272 start.go:353] cluster config:
	{Name:no-preload-587009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:40:56.063305 1061272 out.go:179] * Starting "no-preload-587009" primary control-plane node in "no-preload-587009" cluster
	I1210 07:40:56.066145 1061272 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:40:56.069186 1061272 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:40:56.071986 1061272 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:40:56.072004 1061272 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:40:56.072134 1061272 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/config.json ...
	I1210 07:40:56.072164 1061272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/config.json: {Name:mkde50bf94ffc0ada6964ae54948a2d9158c11f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:40:56.072405 1061272 cache.go:107] acquiring lock: {Name:mkabea6e7b1e77c374f63c9a4d0766be00cc6317 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:40:56.072460 1061272 cache.go:115] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 07:40:56.072479 1061272 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 80.461µs
	I1210 07:40:56.072498 1061272 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 07:40:56.072511 1061272 cache.go:107] acquiring lock: {Name:mk64f56a3ea6b87518d3bc512eef54d76035bb9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:40:56.072633 1061272 cache.go:107] acquiring lock: {Name:mk88572bf90913c057455c882907a6c4416350fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:40:56.072714 1061272 cache.go:107] acquiring lock: {Name:mk9279f9c659c863cac5b3805141cb5f659d3427 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:40:56.072845 1061272 cache.go:107] acquiring lock: {Name:mkc3e57bbe80791d398050e8951aea73d362d920 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:40:56.073022 1061272 cache.go:107] acquiring lock: {Name:mk89d503b38bf82fa0b7406e77e02d931662720f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:40:56.073336 1061272 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 07:40:56.073550 1061272 cache.go:107] acquiring lock: {Name:mkde71767452c33eccd8ae2cb3e7952dfc30e95a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:40:56.076411 1061272 cache.go:107] acquiring lock: {Name:mkb61a80f7472bdfd6bbc597d8ce9f0afe659105 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:40:56.076663 1061272 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 07:40:56.076789 1061272 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:40:56.077048 1061272 cache.go:115] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 07:40:56.077154 1061272 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 4.133588ms
	I1210 07:40:56.077192 1061272 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 07:40:56.077286 1061272 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 07:40:56.077645 1061272 cache.go:115] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 07:40:56.077693 1061272 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 4.978713ms
	I1210 07:40:56.078444 1061272 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 07:40:56.077958 1061272 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 07:40:56.078326 1061272 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 07:40:56.080087 1061272 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 07:40:56.081430 1061272 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:40:56.081738 1061272 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 07:40:56.082833 1061272 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 07:40:56.103516 1061272 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:40:56.103537 1061272 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:40:56.103553 1061272 cache.go:243] Successfully downloaded all kic artifacts
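
The cache.go lines above all follow one pattern per image: take a per-image lock, and if the tarball already exists under .minikube/cache/images/arm64, count the save as done and log how long the check took. A rough sketch of that existence check; cachePath and ensureCached are hypothetical names for illustration, not minikube's actual functions:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
        "time"
    )

    // cachePath mirrors the layout seen in the log:
    // <MINIKUBE_HOME>/cache/images/arm64/<registry path>_<tag>
    func cachePath(home, image string) string {
        name := strings.ReplaceAll(image, ":", "_")
        return filepath.Join(home, "cache", "images", "arm64", name)
    }

    func ensureCached(home, image string) error {
        start := time.Now()
        p := cachePath(home, image)
        if _, err := os.Stat(p); err == nil {
            // Already saved to tar; nothing to download.
            fmt.Printf("cache image %q -> %q took %s\n", image, p, time.Since(start))
            return nil
        }
        // Otherwise the image would be pulled and saved here (omitted).
        return fmt.Errorf("not cached: %s", image)
    }

    func main() {
        home := os.Getenv("MINIKUBE_HOME")
        _ = ensureCached(home, "registry.k8s.io/etcd:3.6.5-0")
    }
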
	I1210 07:40:56.103781 1061272 start.go:360] acquireMachinesLock for no-preload-587009: {Name:mk024fb9ab341e7f6dd2192e8a4fa44e5bf27c0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:40:56.103935 1061272 start.go:364] duration metric: took 130.932µs to acquireMachinesLock for "no-preload-587009"
	I1210 07:40:56.103964 1061272 start.go:93] Provisioning new machine with config: &{Name:no-preload-587009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:40:56.104045 1061272 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:40:56.107685 1061272 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:40:56.107958 1061272 start.go:159] libmachine.API.Create for "no-preload-587009" (driver="docker")
	I1210 07:40:56.108049 1061272 client.go:173] LocalClient.Create starting
	I1210 07:40:56.108161 1061272 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem
	I1210 07:40:56.108204 1061272 main.go:143] libmachine: Decoding PEM data...
	I1210 07:40:56.108249 1061272 main.go:143] libmachine: Parsing certificate...
	I1210 07:40:56.108342 1061272 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem
	I1210 07:40:56.108373 1061272 main.go:143] libmachine: Decoding PEM data...
	I1210 07:40:56.108416 1061272 main.go:143] libmachine: Parsing certificate...
	I1210 07:40:56.108929 1061272 cli_runner.go:164] Run: docker network inspect no-preload-587009 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:40:56.128795 1061272 cli_runner.go:211] docker network inspect no-preload-587009 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:40:56.128885 1061272 network_create.go:284] running [docker network inspect no-preload-587009] to gather additional debugging logs...
	I1210 07:40:56.128905 1061272 cli_runner.go:164] Run: docker network inspect no-preload-587009
	W1210 07:40:56.144891 1061272 cli_runner.go:211] docker network inspect no-preload-587009 returned with exit code 1
	I1210 07:40:56.144919 1061272 network_create.go:287] error running [docker network inspect no-preload-587009]: docker network inspect no-preload-587009: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-587009 not found
	I1210 07:40:56.144933 1061272 network_create.go:289] output of [docker network inspect no-preload-587009]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-587009 not found
	
	** /stderr **
	I1210 07:40:56.145029 1061272 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:40:56.163344 1061272 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7092cc4ae12c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:9b:65:77:38:2f} reservation:<nil>}
	I1210 07:40:56.163697 1061272 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-948cd8ab8a49 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6e:79:0a:43:2f:62} reservation:<nil>}
	I1210 07:40:56.164058 1061272 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-21ed51b7c74f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:35:c0:64:58:42} reservation:<nil>}
	I1210 07:40:56.164339 1061272 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a27e07744d6f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:14:25:54:38:89} reservation:<nil>}
	I1210 07:40:56.164841 1061272 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bafd80}
	I1210 07:40:56.164864 1061272 network_create.go:124] attempt to create docker network no-preload-587009 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1210 07:40:56.164924 1061272 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-587009 no-preload-587009
	I1210 07:40:56.238622 1061272 network_create.go:108] docker network no-preload-587009 192.168.85.0/24 created
	I1210 07:40:56.238662 1061272 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-587009" container
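
The four "skipping subnet" lines and the final pick trace the allocation walk: candidates start at 192.168.49.0/24 and, as observed in this run, the third octet advances in steps of 9 (49, 58, 67, 76, 85) until a subnet with no backing bridge interface is found; .2 in that subnet then becomes the node's static IP. A sketch of the walk; taken() is a hypothetical stand-in for the interface and reservation checks:

    package main

    import "fmt"

    // taken stands in for minikube's check against existing
    // bridge interfaces and reservations (hypothetical here).
    func taken(subnet string) bool {
        used := map[string]bool{
            "192.168.49.0/24": true,
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
            "192.168.76.0/24": true,
        }
        return used[subnet]
    }

    func main() {
        // Walk 192.168.49.0/24, 192.168.58.0/24, ... in steps of 9.
        for octet := 49; octet < 255; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            if taken(subnet) {
                fmt.Println("skipping subnet", subnet, "that is taken")
                continue
            }
            fmt.Println("using free private subnet", subnet)
            break
        }
    }
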
	I1210 07:40:56.238743 1061272 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:40:56.254947 1061272 cli_runner.go:164] Run: docker volume create no-preload-587009 --label name.minikube.sigs.k8s.io=no-preload-587009 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:40:56.273040 1061272 oci.go:103] Successfully created a docker volume no-preload-587009
	I1210 07:40:56.273125 1061272 cli_runner.go:164] Run: docker run --rm --name no-preload-587009-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-587009 --entrypoint /usr/bin/test -v no-preload-587009:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
	I1210 07:40:56.424143 1061272 cache.go:162] opening:  /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1210 07:40:56.441526 1061272 cache.go:162] opening:  /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1210 07:40:56.450169 1061272 cache.go:162] opening:  /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1210 07:40:56.452698 1061272 cache.go:162] opening:  /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1210 07:40:56.492305 1061272 cache.go:162] opening:  /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1210 07:40:56.853934 1061272 cache.go:157] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1210 07:40:56.853965 1061272 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 781.337234ms
	I1210 07:40:56.853978 1061272 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1210 07:40:56.935661 1061272 oci.go:107] Successfully prepared a docker volume no-preload-587009
	I1210 07:40:56.935708 1061272 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1210 07:40:56.935835 1061272 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 07:40:56.936009 1061272 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:40:56.996783 1061272 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-587009 --name no-preload-587009 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-587009 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-587009 --network no-preload-587009 --ip 192.168.85.2 --volume no-preload-587009:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
	I1210 07:40:57.412758 1061272 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Running}}
	I1210 07:40:57.461854 1061272 cache.go:157] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1210 07:40:57.462720 1061272 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.389870167s
	I1210 07:40:57.462754 1061272 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1210 07:40:57.467227 1061272 cache.go:157] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1210 07:40:57.467305 1061272 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.390905867s
	I1210 07:40:57.467333 1061272 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1210 07:40:57.480953 1061272 cache.go:157] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 07:40:57.481026 1061272 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.407477151s
	I1210 07:40:57.481054 1061272 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 07:40:57.481175 1061272 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:40:57.516217 1061272 cache.go:157] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1210 07:40:57.516300 1061272 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.443784867s
	I1210 07:40:57.516371 1061272 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1210 07:40:57.516451 1061272 cache.go:87] Successfully saved all images to host disk.
	I1210 07:40:57.517211 1061272 cli_runner.go:164] Run: docker exec no-preload-587009 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:40:57.569549 1061272 oci.go:144] the created container "no-preload-587009" has a running status.
	I1210 07:40:57.569576 1061272 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa...
	I1210 07:40:57.853682 1061272 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:40:57.885651 1061272 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:40:57.914735 1061272 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:40:57.914755 1061272 kic_runner.go:114] Args: [docker exec --privileged no-preload-587009 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:40:57.982586 1061272 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:40:58.003398 1061272 machine.go:94] provisionDockerMachine start ...
	I1210 07:40:58.003500 1061272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:40:58.040107 1061272 main.go:143] libmachine: Using SSH client type: native
	I1210 07:40:58.040469 1061272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33830 <nil> <nil>}
	I1210 07:40:58.040479 1061272 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:40:58.041088 1061272 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37446->127.0.0.1:33830: read: connection reset by peer
	I1210 07:41:01.258209 1061272 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-587009
	
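
Between the container inspect and the successful hostname command above, provisioning resolves the published SSH port from the Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} (33830 here) and simply retries after the initial "connection reset by peer" while sshd inside the container finishes starting. A sketch of that lookup-and-retry, assuming the docker CLI is on PATH:

    package main

    import (
        "fmt"
        "net"
        "os/exec"
        "strings"
        "time"
    )

    // sshHostPort reads the host port docker mapped to the container's 22/tcp.
    func sshHostPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            container).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        port, err := sshHostPort("no-preload-587009")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        addr := net.JoinHostPort("127.0.0.1", port)
        // The first dial can hit "connection reset by peer" while sshd
        // is still coming up, so retry briefly, as the log shows.
        for i := 0; i < 10; i++ {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("ssh reachable on", addr)
                return
            }
            time.Sleep(time.Second)
        }
        fmt.Println("ssh never became reachable on", addr)
    }
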
	I1210 07:41:01.258241 1061272 ubuntu.go:182] provisioning hostname "no-preload-587009"
	I1210 07:41:01.258306 1061272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:41:01.286562 1061272 main.go:143] libmachine: Using SSH client type: native
	I1210 07:41:01.286914 1061272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33830 <nil> <nil>}
	I1210 07:41:01.286932 1061272 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-587009 && echo "no-preload-587009" | sudo tee /etc/hostname
	I1210 07:41:01.449173 1061272 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-587009
	
	I1210 07:41:01.449265 1061272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:41:01.472906 1061272 main.go:143] libmachine: Using SSH client type: native
	I1210 07:41:01.473210 1061272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33830 <nil> <nil>}
	I1210 07:41:01.473226 1061272 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-587009' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-587009/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-587009' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:41:01.626925 1061272 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:41:01.626980 1061272 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 07:41:01.627017 1061272 ubuntu.go:190] setting up certificates
	I1210 07:41:01.627056 1061272 provision.go:84] configureAuth start
	I1210 07:41:01.627134 1061272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-587009
	I1210 07:41:01.646152 1061272 provision.go:143] copyHostCerts
	I1210 07:41:01.646230 1061272 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 07:41:01.646247 1061272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 07:41:01.646325 1061272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 07:41:01.646446 1061272 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 07:41:01.646458 1061272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 07:41:01.646528 1061272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 07:41:01.646611 1061272 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 07:41:01.646625 1061272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 07:41:01.646652 1061272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 07:41:01.646716 1061272 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.no-preload-587009 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-587009]
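
configureAuth signs a per-machine server certificate against the local CA, with exactly the SANs listed in the log line above. A compressed, self-contained illustration of the signing step using crypto/x509; the throwaway in-process CA exists only to make the sketch runnable, whereas minikube loads ca.pem/ca-key.pem from .minikube/certs:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA so the sketch is self-contained.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SANs from the log line.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-587009"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            DNSNames:     []string{"localhost", "minikube", "no-preload-587009"},
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }
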
	I1210 07:41:02.258954 1061272 provision.go:177] copyRemoteCerts
	I1210 07:41:02.259026 1061272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:41:02.259067 1061272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:41:02.279374 1061272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33830 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:41:02.383629 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:41:02.403885 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:41:02.423344 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:41:02.442198 1061272 provision.go:87] duration metric: took 815.123686ms to configureAuth
	I1210 07:41:02.442223 1061272 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:41:02.442428 1061272 config.go:182] Loaded profile config "no-preload-587009": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:41:02.442436 1061272 machine.go:97] duration metric: took 4.439018829s to provisionDockerMachine
	I1210 07:41:02.442442 1061272 client.go:176] duration metric: took 6.334386467s to LocalClient.Create
	I1210 07:41:02.442542 1061272 start.go:167] duration metric: took 6.334586659s to libmachine.API.Create "no-preload-587009"
	I1210 07:41:02.442551 1061272 start.go:293] postStartSetup for "no-preload-587009" (driver="docker")
	I1210 07:41:02.442563 1061272 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:41:02.442614 1061272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:41:02.442657 1061272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:41:02.460413 1061272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33830 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:41:02.559389 1061272 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:41:02.563155 1061272 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:41:02.563183 1061272 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:41:02.563195 1061272 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 07:41:02.563252 1061272 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 07:41:02.563348 1061272 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 07:41:02.563467 1061272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:41:02.571185 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:41:02.592122 1061272 start.go:296] duration metric: took 149.556496ms for postStartSetup
	I1210 07:41:02.592527 1061272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-587009
	I1210 07:41:02.613325 1061272 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/config.json ...
	I1210 07:41:02.613617 1061272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:41:02.613668 1061272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:41:02.633048 1061272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33830 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:41:02.727540 1061272 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:41:02.732295 1061272 start.go:128] duration metric: took 6.628235631s to createHost
	I1210 07:41:02.732320 1061272 start.go:83] releasing machines lock for "no-preload-587009", held for 6.628374201s
	I1210 07:41:02.732393 1061272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-587009
	I1210 07:41:02.754702 1061272 ssh_runner.go:195] Run: cat /version.json
	I1210 07:41:02.754735 1061272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:41:02.754756 1061272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:41:02.754806 1061272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:41:02.774680 1061272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33830 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:41:02.778801 1061272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33830 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:41:02.875104 1061272 ssh_runner.go:195] Run: systemctl --version
	I1210 07:41:02.979941 1061272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:41:02.984179 1061272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:41:02.984254 1061272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:41:03.021477 1061272 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
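
The find/-exec pipeline above sidelines competing bridge and podman CNI configs by renaming them with a .mk_disabled suffix, which is why 87-podman-bridge.conflist shows up as disabled. The same operation sketched in Go:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        matches, _ := filepath.Glob("/etc/cni/net.d/*")
        for _, p := range matches {
            base := filepath.Base(p)
            if strings.HasSuffix(base, ".mk_disabled") {
                continue // already sidelined
            }
            if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
                // Same effect as the logged `sudo mv {} {}.mk_disabled`.
                if err := os.Rename(p, p+".mk_disabled"); err != nil {
                    fmt.Println(err)
                }
            }
        }
    }
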
	I1210 07:41:03.021570 1061272 start.go:496] detecting cgroup driver to use...
	I1210 07:41:03.021620 1061272 detect.go:187] detected "cgroupfs" cgroup driver on host os
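
detect.go's "cgroupfs" verdict comes from inspecting the host. One common heuristic, offered here as an assumption rather than minikube's exact logic, is to treat a unified /sys/fs/cgroup (cgroup v2) as the systemd-leaning case and fall back to cgroupfs otherwise:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // On cgroup v2 hosts, /sys/fs/cgroup/cgroup.controllers exists
        // (unified hierarchy, typically paired with the systemd driver).
        // On cgroup v1 hosts, such as this Ubuntu 20.04 runner, it does
        // not, and "cgroupfs" is the safe default.
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            fmt.Println("detected unified cgroup hierarchy (cgroup v2)")
        } else {
            fmt.Println(`detected "cgroupfs" cgroup driver on host os`)
        }
    }
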
	I1210 07:41:03.021710 1061272 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:41:03.038131 1061272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:41:03.051941 1061272 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:41:03.052012 1061272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:41:03.072021 1061272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:41:03.091630 1061272 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:41:03.218307 1061272 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:41:03.349890 1061272 docker.go:234] disabling docker service ...
	I1210 07:41:03.349984 1061272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:41:03.383218 1061272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:41:03.397230 1061272 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:41:03.527107 1061272 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:41:03.652044 1061272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:41:03.664870 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:41:03.680403 1061272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:41:03.689860 1061272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:41:03.699491 1061272 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:41:03.699596 1061272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:41:03.708515 1061272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:41:03.717527 1061272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:41:03.726675 1061272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:41:03.735595 1061272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:41:03.748969 1061272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:41:03.757962 1061272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:41:03.767125 1061272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:41:03.776692 1061272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:41:03.785259 1061272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:41:03.792969 1061272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:41:03.918080 1061272 ssh_runner.go:195] Run: sudo systemctl restart containerd
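
The run of sed edits above rewrites /etc/containerd/config.toml in place: pinning sandbox_image to pause:3.10.1, forcing SystemdCgroup = false to match the cgroupfs driver, and normalizing the runc runtime name to io.containerd.runc.v2, before the daemon-reload and containerd restart. The same rewrite expressed in Go, with patterns copied from the logged commands (the path is real on the node, hypothetical on a dev box, and needs root either way):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Println(err)
            return
        }
        cfg := string(data)
        // Equivalents of the logged sed expressions.
        cfg = regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`).
            ReplaceAllString(cfg, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`)
        cfg = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`).
            ReplaceAllString(cfg, `${1}SystemdCgroup = false`)
        cfg = regexp.MustCompile(`"io\.containerd\.runtime\.v1\.linux"|"io\.containerd\.runc\.v1"`).
            ReplaceAllString(cfg, `"io.containerd.runc.v2"`)
        if err := os.WriteFile(path, []byte(cfg), 0o644); err != nil {
            fmt.Println(err)
        }
        // A daemon-reload plus `systemctl restart containerd` follows,
        // exactly as the log shows.
    }
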
	I1210 07:41:04.024351 1061272 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:41:04.024446 1061272 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:41:04.029593 1061272 start.go:564] Will wait 60s for crictl version
	I1210 07:41:04.029690 1061272 ssh_runner.go:195] Run: which crictl
	I1210 07:41:04.034038 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:41:04.062456 1061272 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:41:04.062607 1061272 ssh_runner.go:195] Run: containerd --version
	I1210 07:41:04.084298 1061272 ssh_runner.go:195] Run: containerd --version
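
"Will wait 60s for socket path" above is a plain stat-poll on /run/containerd/containerd.sock; a minimal equivalent:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func main() {
        const sock = "/run/containerd/containerd.sock"
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(sock); err == nil {
                fmt.Println("socket ready:", sock)
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for", sock)
    }
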
	I1210 07:41:04.117088 1061272 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 07:41:04.120110 1061272 cli_runner.go:164] Run: docker network inspect no-preload-587009 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:41:04.137720 1061272 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 07:41:04.141845 1061272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:41:04.152529 1061272 kubeadm.go:884] updating cluster {Name:no-preload-587009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:41:04.152651 1061272 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:41:04.152718 1061272 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:41:04.176899 1061272 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1210 07:41:04.176927 1061272 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 07:41:04.176964 1061272 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:41:04.177169 1061272 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 07:41:04.177274 1061272 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 07:41:04.177366 1061272 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 07:41:04.177468 1061272 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 07:41:04.177569 1061272 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 07:41:04.177676 1061272 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:41:04.177777 1061272 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:41:04.178797 1061272 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:41:04.179039 1061272 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:41:04.179348 1061272 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 07:41:04.179618 1061272 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 07:41:04.179760 1061272 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 07:41:04.179887 1061272 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:41:04.180022 1061272 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 07:41:04.180180 1061272 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 07:41:04.397459 1061272 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1210 07:41:04.397577 1061272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 07:41:04.427420 1061272 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1210 07:41:04.427500 1061272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:41:04.430823 1061272 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1210 07:41:04.430942 1061272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1210 07:41:04.435013 1061272 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1210 07:41:04.435382 1061272 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1210 07:41:04.435498 1061272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 07:41:04.435670 1061272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 07:41:04.450289 1061272 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1210 07:41:04.450359 1061272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1210 07:41:04.456648 1061272 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1210 07:41:04.456724 1061272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 07:41:04.462726 1061272 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1210 07:41:04.462770 1061272 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 07:41:04.462819 1061272 ssh_runner.go:195] Run: which crictl
	I1210 07:41:04.482666 1061272 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1210 07:41:04.482710 1061272 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:41:04.482759 1061272 ssh_runner.go:195] Run: which crictl
	I1210 07:41:04.482841 1061272 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1210 07:41:04.482858 1061272 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:41:04.482879 1061272 ssh_runner.go:195] Run: which crictl
	I1210 07:41:04.522683 1061272 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1210 07:41:04.522781 1061272 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 07:41:04.522866 1061272 ssh_runner.go:195] Run: which crictl
	I1210 07:41:04.523029 1061272 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1210 07:41:04.523082 1061272 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 07:41:04.523142 1061272 ssh_runner.go:195] Run: which crictl
	I1210 07:41:04.523267 1061272 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1210 07:41:04.523314 1061272 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 07:41:04.523355 1061272 ssh_runner.go:195] Run: which crictl
	I1210 07:41:04.529396 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:41:04.529513 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 07:41:04.529585 1061272 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1210 07:41:04.529729 1061272 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 07:41:04.529636 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 07:41:04.529862 1061272 ssh_runner.go:195] Run: which crictl
	I1210 07:41:04.535011 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 07:41:04.535168 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 07:41:04.535476 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 07:41:04.609022 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 07:41:04.609170 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:41:04.612216 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 07:41:04.612339 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 07:41:04.683503 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 07:41:04.683584 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 07:41:04.683713 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 07:41:04.703875 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1210 07:41:04.703998 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 07:41:04.704064 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:41:04.708749 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 07:41:04.785587 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1210 07:41:04.785716 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 07:41:04.785809 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1210 07:41:04.846586 1061272 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1210 07:41:04.846841 1061272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1210 07:41:04.846679 1061272 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1210 07:41:04.846989 1061272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1210 07:41:04.846698 1061272 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1210 07:41:04.847121 1061272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:41:04.846766 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1210 07:41:04.872059 1061272 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1210 07:41:04.872192 1061272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1210 07:41:04.872261 1061272 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1210 07:41:04.872321 1061272 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1210 07:41:04.872496 1061272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1210 07:41:04.872649 1061272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 07:41:04.897182 1061272 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1210 07:41:04.897251 1061272 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1210 07:41:04.897319 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1210 07:41:04.897379 1061272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1210 07:41:04.897433 1061272 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 07:41:04.897455 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1210 07:41:04.897205 1061272 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1210 07:41:04.897518 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1210 07:41:04.897533 1061272 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 07:41:04.897566 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1210 07:41:04.897603 1061272 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1210 07:41:04.897622 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1210 07:41:04.897663 1061272 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1210 07:41:04.897699 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1210 07:41:04.972428 1061272 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1210 07:41:04.972506 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
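Each stat/scp pair above is the same existence check: probe the remote path, and transfer the cached tarball only when the probe exits non-zero. A minimal shell sketch of that pattern, using one path from this log (the ssh host and user are placeholders, not values from the log):

    # Probe the remote copy; transfer from the local cache only if it is missing.
    SRC=~/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
    DST=/var/lib/minikube/images/pause_3.10.1
    if ! ssh docker@minikube "stat -c '%s %y' $DST" >/dev/null 2>&1; then
        scp "$SRC" docker@minikube:"$DST"
    fi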
	I1210 07:41:05.028749 1061272 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 07:41:05.028882 1061272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1210 07:41:05.386373 1061272 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1210 07:41:05.428128 1061272 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1210 07:41:05.428262 1061272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	W1210 07:41:05.484456 1061272 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1210 07:41:05.484666 1061272 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1210 07:41:05.484763 1061272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:41:06.787692 1061272 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5: (1.302866544s)
	I1210 07:41:06.787771 1061272 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1210 07:41:06.787809 1061272 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:41:06.787882 1061272 ssh_runner.go:195] Run: which crictl
	I1210 07:41:06.787952 1061272 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.359663002s)
	I1210 07:41:06.787967 1061272 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1210 07:41:06.787983 1061272 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:41:06.788071 1061272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:41:06.793917 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:41:08.294759 1061272 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.500766082s)
	I1210 07:41:08.294831 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:41:08.294724 1061272 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.506611599s)
	I1210 07:41:08.294910 1061272 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1210 07:41:08.294932 1061272 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1210 07:41:08.294957 1061272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1210 07:41:08.326887 1061272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:41:09.305466 1061272 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.010465436s)
	I1210 07:41:09.305499 1061272 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1210 07:41:09.305518 1061272 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1210 07:41:09.305565 1061272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1210 07:41:09.305642 1061272 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 07:41:09.305714 1061272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:41:10.504339 1061272 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.198600882s)
	I1210 07:41:10.504368 1061272 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 07:41:10.504391 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1210 07:41:10.504446 1061272 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.198866601s)
	I1210 07:41:10.504455 1061272 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1210 07:41:10.504470 1061272 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1210 07:41:10.504508 1061272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1210 07:41:11.569219 1061272 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.064689073s)
	I1210 07:41:11.569242 1061272 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1210 07:41:11.569259 1061272 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1210 07:41:11.569307 1061272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1210 07:41:12.629093 1061272 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.059750388s)
	I1210 07:41:12.629119 1061272 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1210 07:41:12.629139 1061272 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:41:12.629187 1061272 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:41:13.287557 1061272 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 07:41:13.287593 1061272 cache_images.go:125] Successfully loaded all cached images
	I1210 07:41:13.287598 1061272 cache_images.go:94] duration metric: took 9.110656653s to LoadCachedImages
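The load phase that just completed can be reproduced by hand with the same commands the log shows: list the image in containerd's k8s.io namespace, remove any stale copy through the CRI tooling, then import the cached tarball. A sketch using the pause image from this run:

    # Is the image already present in containerd's k8s.io namespace?
    sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
    # Drop a stale copy via CRI, then import the cached tarball.
    sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
    sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1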
	I1210 07:41:13.287610 1061272 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1210 07:41:13.287703 1061272 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-587009 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
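In the kubelet drop-in above, the bare ExecStart= clears the command inherited from the base unit so that the following ExecStart= line becomes the only one; systemd would otherwise reject a second ExecStart for a simple service. The snippet is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down in this log, after which the merged unit can be inspected (illustrative commands):

    sudo systemctl daemon-reload            # pick up the new drop-in
    systemctl cat kubelet                   # base unit plus 10-kubeadm.conf
    systemctl show kubelet -p ExecStart     # the effective command line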
	I1210 07:41:13.287765 1061272 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:41:13.315687 1061272 cni.go:84] Creating CNI manager for ""
	I1210 07:41:13.315712 1061272 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:41:13.315734 1061272 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:41:13.315759 1061272 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-587009 NodeName:no-preload-587009 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:41:13.315912 1061272 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-587009"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
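The config just rendered bundles four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. kubeadm can exercise such a file without touching the node via its dry-run mode; a hedged sketch using the binary and path this run writes:

    # Validate the generated config without creating any cluster state.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml --dry-run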
	I1210 07:41:13.315985 1061272 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:41:13.324657 1061272 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1210 07:41:13.324723 1061272 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:41:13.333355 1061272 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet
	I1210 07:41:13.334145 1061272 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1210 07:41:13.334271 1061272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1210 07:41:13.334830 1061272 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm
	I1210 07:41:13.340588 1061272 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1210 07:41:13.340629 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1210 07:41:14.112453 1061272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:41:14.145428 1061272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1210 07:41:14.157508 1061272 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1210 07:41:14.157562 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1210 07:41:14.380815 1061272 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1210 07:41:14.414825 1061272 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1210 07:41:14.414905 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
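The checksum=file:... fragments in the download URLs above mean each binary is verified against its published .sha256 companion before being cached. Checking one by hand with the kubelet URL from this log (plain curl and sha256sum, not minikube's internal verifier):

    curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet
    curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
    # sha256sum expects "<hash>  <filename>"; the .sha256 file carries only the hash.
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check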
	I1210 07:41:14.983593 1061272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:41:14.992728 1061272 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 07:41:15.016016 1061272 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:41:15.037174 1061272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1210 07:41:15.056484 1061272 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:41:15.064503 1061272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:41:15.079193 1061272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:41:15.248579 1061272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:41:15.269478 1061272 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009 for IP: 192.168.85.2
	I1210 07:41:15.269496 1061272 certs.go:195] generating shared ca certs ...
	I1210 07:41:15.269511 1061272 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:15.269649 1061272 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 07:41:15.269692 1061272 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 07:41:15.269699 1061272 certs.go:257] generating profile certs ...
	I1210 07:41:15.269753 1061272 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/client.key
	I1210 07:41:15.269764 1061272 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/client.crt with IP's: []
	I1210 07:41:15.436688 1061272 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/client.crt ...
	I1210 07:41:15.436721 1061272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/client.crt: {Name:mk017855c33f7e9c05870d92a5bd96fbe91c087d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:15.437488 1061272 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/client.key ...
	I1210 07:41:15.437506 1061272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/client.key: {Name:mk792f7499496dce0a2301f0054a9e39403e205d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:15.438162 1061272 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.key.841ee17a
	I1210 07:41:15.438186 1061272 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.crt.841ee17a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1210 07:41:15.595929 1061272 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.crt.841ee17a ...
	I1210 07:41:15.596038 1061272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.crt.841ee17a: {Name:mk92c6f00e0ec640c6155f51bd046904035f3807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:15.596331 1061272 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.key.841ee17a ...
	I1210 07:41:15.596405 1061272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.key.841ee17a: {Name:mkc4af706ae099cddc009d56b95742c777f014cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:15.597137 1061272 certs.go:382] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.crt.841ee17a -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.crt
	I1210 07:41:15.597336 1061272 certs.go:386] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.key.841ee17a -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.key
	I1210 07:41:15.597478 1061272 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/proxy-client.key
	I1210 07:41:15.597541 1061272 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/proxy-client.crt with IP's: []
	I1210 07:41:16.248661 1061272 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/proxy-client.crt ...
	I1210 07:41:16.248690 1061272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/proxy-client.crt: {Name:mk74b3ffaaa8865b621740e6677267d3a413321b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:16.248875 1061272 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/proxy-client.key ...
	I1210 07:41:16.248893 1061272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/proxy-client.key: {Name:mkf12137f153cddad4a373253979a874274b38f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:16.249111 1061272 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 07:41:16.249174 1061272 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 07:41:16.249189 1061272 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:41:16.249219 1061272 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:41:16.249251 1061272 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:41:16.249280 1061272 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 07:41:16.249329 1061272 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:41:16.249894 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:41:16.270919 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:41:16.293184 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:41:16.316083 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:41:16.349540 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:41:16.372094 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 07:41:16.415624 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:41:16.441108 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:41:16.460598 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 07:41:16.479802 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 07:41:16.500597 1061272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:41:16.521645 1061272 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:41:16.540885 1061272 ssh_runner.go:195] Run: openssl version
	I1210 07:41:16.548790 1061272 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 07:41:16.560479 1061272 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 07:41:16.573004 1061272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 07:41:16.583905 1061272 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 07:41:16.584061 1061272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 07:41:16.641165 1061272 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:41:16.654458 1061272 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/786751.pem /etc/ssl/certs/51391683.0
	I1210 07:41:16.663452 1061272 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 07:41:16.674177 1061272 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 07:41:16.682975 1061272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 07:41:16.693187 1061272 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 07:41:16.693255 1061272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 07:41:16.789296 1061272 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:41:16.824244 1061272 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7867512.pem /etc/ssl/certs/3ec20f2e.0
	I1210 07:41:16.848894 1061272 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:16.873192 1061272 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:41:16.892233 1061272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:16.900295 1061272 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:16.900376 1061272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:16.960077 1061272 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:41:16.969889 1061272 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
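The openssl x509 -hash -noout calls above print the subject hash OpenSSL uses to locate trusted CAs, which is why each PEM gets a <hash>.0 symlink in /etc/ssl/certs (b5213941.0 for the minikube CA here). Recreating that link by hand with the paths from this log:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$HASH.0"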
	I1210 07:41:16.979555 1061272 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:41:16.986537 1061272 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:41:16.986587 1061272 kubeadm.go:401] StartCluster: {Name:no-preload-587009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:41:16.986659 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:41:16.986719 1061272 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:41:17.032719 1061272 cri.go:89] found id: ""
	I1210 07:41:17.032787 1061272 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:41:17.046634 1061272 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:41:17.059657 1061272 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:41:17.059777 1061272 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:41:17.073177 1061272 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:41:17.073248 1061272 kubeadm.go:158] found existing configuration files:
	
	I1210 07:41:17.073331 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:41:17.087443 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:41:17.087599 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:41:17.106232 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:41:17.123438 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:41:17.123509 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:41:17.140589 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:41:17.157294 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:41:17.157362 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:41:17.168048 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:41:17.183124 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:41:17.183185 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
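The four grep/rm pairs above implement one rule: keep a kubeconfig only if it already points at https://control-plane.minikube.internal:8443, and delete it otherwise (here all four files are simply absent, so each grep exits 2 and the rm is a no-op). The equivalent shell loop, as a sketch:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f" \
            || sudo rm -f "/etc/kubernetes/$f"
    done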
	I1210 07:41:17.196037 1061272 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:41:17.253080 1061272 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:41:17.253473 1061272 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:41:17.378288 1061272 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:41:17.378366 1061272 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:41:17.378408 1061272 kubeadm.go:319] OS: Linux
	I1210 07:41:17.378482 1061272 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:41:17.378575 1061272 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:41:17.378631 1061272 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:41:17.378679 1061272 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:41:17.378733 1061272 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:41:17.378787 1061272 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:41:17.378836 1061272 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:41:17.378895 1061272 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:41:17.378952 1061272 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:41:17.489971 1061272 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:41:17.490089 1061272 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:41:17.490186 1061272 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:41:17.506858 1061272 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:41:17.513191 1061272 out.go:252]   - Generating certificates and keys ...
	I1210 07:41:17.513301 1061272 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:41:17.513374 1061272 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:41:18.414297 1061272 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:41:18.705436 1061272 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:41:19.583482 1061272 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:41:19.668274 1061272 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:41:19.952249 1061272 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:41:19.952396 1061272 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-587009] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 07:41:20.197979 1061272 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:41:20.198362 1061272 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-587009] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 07:41:20.303759 1061272 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:41:20.429514 1061272 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:41:20.621346 1061272 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:41:20.621654 1061272 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:41:20.719661 1061272 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:41:21.330818 1061272 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:41:21.932613 1061272 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:41:22.466827 1061272 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:41:22.874611 1061272 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:41:22.875806 1061272 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:41:22.880368 1061272 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:41:22.884509 1061272 out.go:252]   - Booting up control plane ...
	I1210 07:41:22.884617 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:41:22.884696 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:41:22.885635 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:41:22.908007 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:41:22.908118 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:41:22.919248 1061272 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:41:22.924362 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:41:22.924418 1061272 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:41:23.105604 1061272 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:41:23.105729 1061272 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:45:23.105552 1061272 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000403225s
	I1210 07:45:23.105596 1061272 kubeadm.go:319] 
	I1210 07:45:23.105659 1061272 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:45:23.105695 1061272 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:45:23.105810 1061272 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:45:23.105817 1061272 kubeadm.go:319] 
	I1210 07:45:23.105931 1061272 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:45:23.105968 1061272 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:45:23.106003 1061272 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:45:23.106008 1061272 kubeadm.go:319] 
	I1210 07:45:23.110089 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:23.110529 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:23.110638 1061272 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:45:23.110873 1061272 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:45:23.110878 1061272 kubeadm.go:319] 
	I1210 07:45:23.110946 1061272 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 07:45:23.111048 1061272 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-587009] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-587009] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000403225s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
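The cgroups v1 warning above names the opt-out kubeadm expects: set the kubelet configuration field 'failCgroupV1' to 'false'. A minimal sketch of applying that by hand on the node, assuming shell access; the field name is taken from the warning text and /var/lib/kubelet/config.yaml is the file the [kubelet-start] phase writes above, but nothing in this run executed these commands:

  # Sketch only: opt kubelet >= v1.35 back into cgroup v1, per the
  # SystemVerification warning (assumes the field is not already set).
  echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
  sudo systemctl restart kubelet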
	
	I1210 07:45:23.111129 1061272 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 07:45:23.528980 1061272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:45:23.543064 1061272 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:45:23.543133 1061272 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:45:23.552680 1061272 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:45:23.552702 1061272 kubeadm.go:158] found existing configuration files:
	
	I1210 07:45:23.552757 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:45:23.561132 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:45:23.561196 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:45:23.569220 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:45:23.577552 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:45:23.577617 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:45:23.585736 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:45:23.594195 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:45:23.594261 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:45:23.602367 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:45:23.610802 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:45:23.610868 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
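The four grep/rm pairs above condense to one loop: minikube keeps a kubeconfig only if it already points at the expected control-plane endpoint and deletes it otherwise, so the retried kubeadm init regenerates all of them. An illustrative rewrite of exactly those commands, not minikube's own code:

  for f in admin kubelet controller-manager scheduler; do
    sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
      || sudo rm -f "/etc/kubernetes/${f}.conf"
  done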
	I1210 07:45:23.618934 1061272 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:45:23.738244 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:23.738666 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:23.820302 1061272 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:49:26.015309 1061272 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 07:49:26.015352 1061272 kubeadm.go:319] 
	I1210 07:49:26.015478 1061272 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 07:49:26.021506 1061272 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:49:26.021573 1061272 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:49:26.021669 1061272 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:49:26.021735 1061272 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:49:26.021780 1061272 kubeadm.go:319] OS: Linux
	I1210 07:49:26.021833 1061272 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:49:26.021898 1061272 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:49:26.021954 1061272 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:49:26.022012 1061272 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:49:26.022072 1061272 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:49:26.022130 1061272 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:49:26.022183 1061272 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:49:26.022239 1061272 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:49:26.022294 1061272 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:49:26.022377 1061272 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:49:26.022520 1061272 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:49:26.022665 1061272 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:49:26.022797 1061272 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:49:26.025625 1061272 out.go:252]   - Generating certificates and keys ...
	I1210 07:49:26.025738 1061272 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:49:26.025820 1061272 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:49:26.025909 1061272 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:49:26.025981 1061272 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:49:26.026084 1061272 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:49:26.026145 1061272 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:49:26.026218 1061272 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:49:26.026288 1061272 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:49:26.026372 1061272 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:49:26.026456 1061272 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:49:26.026527 1061272 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:49:26.026596 1061272 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:49:26.026658 1061272 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:49:26.026731 1061272 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:49:26.026814 1061272 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:49:26.026910 1061272 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:49:26.027000 1061272 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:49:26.027123 1061272 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:49:26.027217 1061272 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:49:26.032204 1061272 out.go:252]   - Booting up control plane ...
	I1210 07:49:26.032327 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:49:26.032449 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:49:26.032535 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:49:26.032660 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:49:26.032760 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:49:26.032871 1061272 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:49:26.032963 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:49:26.033008 1061272 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:49:26.033144 1061272 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:49:26.033252 1061272 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:49:26.033319 1061272 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00018658s
	I1210 07:49:26.033356 1061272 kubeadm.go:319] 
	I1210 07:49:26.033430 1061272 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:49:26.033471 1061272 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:49:26.033578 1061272 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:49:26.033591 1061272 kubeadm.go:319] 
	I1210 07:49:26.033695 1061272 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:49:26.033732 1061272 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:49:26.033765 1061272 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:49:26.033838 1061272 kubeadm.go:403] duration metric: took 8m9.047256448s to StartCluster
	I1210 07:49:26.033878 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:49:26.033967 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:49:26.034180 1061272 kubeadm.go:319] 
	I1210 07:49:26.078012 1061272 cri.go:89] found id: ""
	I1210 07:49:26.078053 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.078063 1061272 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:49:26.078088 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:49:26.078174 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:49:26.106609 1061272 cri.go:89] found id: ""
	I1210 07:49:26.106637 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.106653 1061272 logs.go:284] No container was found matching "etcd"
	I1210 07:49:26.106660 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:49:26.106763 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:49:26.132553 1061272 cri.go:89] found id: ""
	I1210 07:49:26.132579 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.132589 1061272 logs.go:284] No container was found matching "coredns"
	I1210 07:49:26.132595 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:49:26.132657 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:49:26.159729 1061272 cri.go:89] found id: ""
	I1210 07:49:26.159779 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.159789 1061272 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:49:26.159797 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:49:26.159864 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:49:26.185308 1061272 cri.go:89] found id: ""
	I1210 07:49:26.185386 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.185409 1061272 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:49:26.185430 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:49:26.185524 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:49:26.210297 1061272 cri.go:89] found id: ""
	I1210 07:49:26.210364 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.210388 1061272 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:49:26.210409 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:49:26.210538 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:49:26.235247 1061272 cri.go:89] found id: ""
	I1210 07:49:26.235320 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.235341 1061272 logs.go:284] No container was found matching "kindnet"
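The container sweep above can be reproduced by hand with the same binary and flags the log shows; a zero count for every name, as here, means kubeadm never got a single control-plane container running under containerd:

  # Illustrative condensation of the per-name crictl calls logged above.
  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
    printf '%-24s %s\n' "$name" "$(sudo crictl ps -a --quiet --name="$name" | wc -l)"
  done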
	I1210 07:49:26.235352 1061272 logs.go:123] Gathering logs for kubelet ...
	I1210 07:49:26.235364 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:49:26.292545 1061272 logs.go:123] Gathering logs for dmesg ...
	I1210 07:49:26.292580 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:49:26.309666 1061272 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:49:26.309695 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:49:26.371886 1061272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:49:26.363797    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.364463    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.366186    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.366720    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.368404    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:49:26.371909 1061272 logs.go:123] Gathering logs for containerd ...
	I1210 07:49:26.371922 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:49:26.414122 1061272 logs.go:123] Gathering logs for container status ...
	I1210 07:49:26.414158 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
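Alongside the log bundle gathered above, the probe kubeadm kept failing on can be hit directly; the URL is copied from the error message, and reaching the node via 'minikube ssh' with this test's profile name is an assumption about how one would reproduce it interactively:

  # Hypothetical manual reproduction; URL from the kubeadm error above.
  out/minikube-linux-arm64 ssh -p no-preload-587009 -- curl -sS http://127.0.0.1:10248/healthz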
	W1210 07:49:26.443108 1061272 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00018658s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 07:49:26.443165 1061272 out.go:285] * 
	W1210 07:49:26.443224 1061272 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00018658s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:49:26.443242 1061272 out.go:285] * 
	W1210 07:49:26.445452 1061272 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
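For this run, the collection step the box asks for would name the failing profile explicitly; the binary path and profile are taken from this test, and the command is illustrative rather than something the harness executed:

  # Illustrative; --file and the GitHub-issue workflow come from the box above.
  out/minikube-linux-arm64 logs -p no-preload-587009 --file=logs.txt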
	I1210 07:49:26.452172 1061272 out.go:203] 
	W1210 07:49:26.455094 1061272 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00018658s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:49:26.455136 1061272 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:49:26.455159 1061272 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:49:26.458257 1061272 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p no-preload-587009 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
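Spelled out against the invocation this test actually used, the suggestion in the log above amounts to one extra flag (whether it resolves the kubelet health-check failure on this cgroup v1 host is untested):

  # Same flags as the failing test run, plus the suggested --extra-config.
  out/minikube-linux-arm64 start -p no-preload-587009 --memory=3072 --alsologtostderr \
    --wait=true --preload=false --driver=docker --container-runtime=containerd \
    --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd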
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-587009
helpers_test.go:244: (dbg) docker inspect no-preload-587009:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf",
	        "Created": "2025-12-10T07:40:57.013300176Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1061581,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:40:57.085196071Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/hostname",
	        "HostsPath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/hosts",
	        "LogPath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf-json.log",
	        "Name": "/no-preload-587009",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-587009:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-587009",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf",
	                "LowerDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-587009",
	                "Source": "/var/lib/docker/volumes/no-preload-587009/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-587009",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-587009",
	                "name.minikube.sigs.k8s.io": "no-preload-587009",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c8b83fbfc75ea1d8c820bf3d9633eb7375349335312aed9e093d5e02998fdbe5",
	            "SandboxKey": "/var/run/docker/netns/c8b83fbfc75e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33830"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33831"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33834"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33832"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33833"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-587009": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:b3:8b:9d:de:92",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "93aee88a5c37e6ba01b74d7794f328193d01cfce9cb66379ab00b3f3e2e73a48",
	                    "EndpointID": "5db24e5622527f7835e680ba82c923c3693544dd67ec75d3b13b6f9a54598147",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-587009",
	                        "59d9b9413fb3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
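
Rather than scanning the whole inspect document above, the fields this post-mortem actually checks can be pulled with Go templates; a small sketch against this container (the port-index template mirrors the one minikube itself runs later in these logs):

	# Container state and restart count:
	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' no-preload-587009
	# Host port published for the API server's 8443/tcp:
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-587009
	# Static IP on the per-profile bridge network:
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-587009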
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-587009 -n no-preload-587009
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-587009 -n no-preload-587009: exit status 6 (342.983168ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 07:49:26.894997 1073817 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-587009" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
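The exit-6 status and the kubeconfig-endpoint error trace back to the stale context the stdout warning calls out; a hedged sketch of the repair the warning itself suggests (assuming the profile was ever written to this kubeconfig):

	# Repoint the kubectl context at the profile, as the WARNING above advises:
	out/minikube-linux-arm64 update-context -p no-preload-587009
	# Confirm the endpoint now resolves from the kubeconfig this job uses:
	KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig kubectl config get-contexts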
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-587009 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-444518 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:38 UTC │ 10 Dec 25 07:39 UTC │
	│ start   │ -p cert-expiration-611923 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                            │ cert-expiration-611923       │ jenkins │ v1.37.0 │ 10 Dec 25 07:38 UTC │ 10 Dec 25 07:38 UTC │
	│ delete  │ -p cert-expiration-611923                                                                                                                                                                                                                                  │ cert-expiration-611923       │ jenkins │ v1.37.0 │ 10 Dec 25 07:38 UTC │ 10 Dec 25 07:38 UTC │
	│ start   │ -p embed-certs-254586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:38 UTC │ 10 Dec 25 07:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-444518 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ stop    │ -p default-k8s-diff-port-444518 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-444518 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ start   │ -p default-k8s-diff-port-444518 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:40 UTC │
	│ addons  │ enable metrics-server -p embed-certs-254586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ stop    │ -p embed-certs-254586 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-254586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ start   │ -p embed-certs-254586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:41 UTC │
	│ image   │ default-k8s-diff-port-444518 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ pause   │ -p default-k8s-diff-port-444518 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ unpause │ -p default-k8s-diff-port-444518 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p default-k8s-diff-port-444518                                                                                                                                                                                                                            │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p default-k8s-diff-port-444518                                                                                                                                                                                                                            │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p disable-driver-mounts-262664                                                                                                                                                                                                                            │ disable-driver-mounts-262664 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ start   │ -p no-preload-587009 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │                     │
	│ image   │ embed-certs-254586 image list --format=json                                                                                                                                                                                                                │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ pause   │ -p embed-certs-254586 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ unpause │ -p embed-certs-254586 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ delete  │ -p embed-certs-254586                                                                                                                                                                                                                                      │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ delete  │ -p embed-certs-254586                                                                                                                                                                                                                                      │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ start   │ -p newest-cni-237317 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:41:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:41:22.419570 1064794 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:41:22.419770 1064794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:41:22.421060 1064794 out.go:374] Setting ErrFile to fd 2...
	I1210 07:41:22.421091 1064794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:41:22.421504 1064794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 07:41:22.422065 1064794 out.go:368] Setting JSON to false
	I1210 07:41:22.423214 1064794 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23007,"bootTime":1765329476,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 07:41:22.423323 1064794 start.go:143] virtualization:  
	I1210 07:41:22.427718 1064794 out.go:179] * [newest-cni-237317] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:41:22.431355 1064794 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:41:22.431601 1064794 notify.go:221] Checking for updates...
	I1210 07:41:22.438163 1064794 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:41:22.441491 1064794 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:41:22.444751 1064794 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 07:41:22.447902 1064794 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:41:22.451150 1064794 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:41:22.454892 1064794 config.go:182] Loaded profile config "no-preload-587009": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:41:22.454990 1064794 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:41:22.505215 1064794 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:41:22.505348 1064794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:41:22.599796 1064794 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-10 07:41:22.587493789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:41:22.599897 1064794 docker.go:319] overlay module found
	I1210 07:41:22.603335 1064794 out.go:179] * Using the docker driver based on user configuration
	I1210 07:41:22.606318 1064794 start.go:309] selected driver: docker
	I1210 07:41:22.606341 1064794 start.go:927] validating driver "docker" against <nil>
	I1210 07:41:22.606356 1064794 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:41:22.607143 1064794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:41:22.691557 1064794 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-10 07:41:22.681931889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:41:22.691722 1064794 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1210 07:41:22.691759 1064794 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1210 07:41:22.691991 1064794 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 07:41:22.702590 1064794 out.go:179] * Using Docker driver with root privileges
	I1210 07:41:22.705586 1064794 cni.go:84] Creating CNI manager for ""
	I1210 07:41:22.705658 1064794 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:41:22.705667 1064794 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 07:41:22.705758 1064794 start.go:353] cluster config:
	{Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:41:22.709067 1064794 out.go:179] * Starting "newest-cni-237317" primary control-plane node in "newest-cni-237317" cluster
	I1210 07:41:22.711942 1064794 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:41:22.714970 1064794 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:41:22.717872 1064794 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:41:22.717929 1064794 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 07:41:22.717943 1064794 cache.go:65] Caching tarball of preloaded images
	I1210 07:41:22.718049 1064794 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 07:41:22.718059 1064794 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1210 07:41:22.718202 1064794 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json ...
	I1210 07:41:22.718221 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json: {Name:mk35831d9cdfb4ee294c317ea1250d3c633e2dde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:22.718581 1064794 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:41:22.747190 1064794 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:41:22.747214 1064794 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:41:22.747228 1064794 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:41:22.747259 1064794 start.go:360] acquireMachinesLock for newest-cni-237317: {Name:mk865fae7594bb364b28b787041666ed4ecb9dd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:41:22.747944 1064794 start.go:364] duration metric: took 667.063µs to acquireMachinesLock for "newest-cni-237317"
	I1210 07:41:22.747984 1064794 start.go:93] Provisioning new machine with config: &{Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:41:22.748068 1064794 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:41:21.330818 1061272 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:41:21.932613 1061272 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:41:22.466827 1061272 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:41:22.874611 1061272 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:41:22.875806 1061272 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:41:22.880368 1061272 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:41:22.884509 1061272 out.go:252]   - Booting up control plane ...
	I1210 07:41:22.884617 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:41:22.884696 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:41:22.885635 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:41:22.908007 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:41:22.908118 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:41:22.919248 1061272 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:41:22.924362 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:41:22.924418 1061272 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:41:23.105604 1061272 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:41:23.105729 1061272 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:41:22.751504 1064794 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:41:22.751752 1064794 start.go:159] libmachine.API.Create for "newest-cni-237317" (driver="docker")
	I1210 07:41:22.751794 1064794 client.go:173] LocalClient.Create starting
	I1210 07:41:22.751869 1064794 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem
	I1210 07:41:22.751907 1064794 main.go:143] libmachine: Decoding PEM data...
	I1210 07:41:22.751924 1064794 main.go:143] libmachine: Parsing certificate...
	I1210 07:41:22.751982 1064794 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem
	I1210 07:41:22.751999 1064794 main.go:143] libmachine: Decoding PEM data...
	I1210 07:41:22.752011 1064794 main.go:143] libmachine: Parsing certificate...
	I1210 07:41:22.752421 1064794 cli_runner.go:164] Run: docker network inspect newest-cni-237317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:41:22.771298 1064794 cli_runner.go:211] docker network inspect newest-cni-237317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:41:22.771401 1064794 network_create.go:284] running [docker network inspect newest-cni-237317] to gather additional debugging logs...
	I1210 07:41:22.771420 1064794 cli_runner.go:164] Run: docker network inspect newest-cni-237317
	W1210 07:41:22.796107 1064794 cli_runner.go:211] docker network inspect newest-cni-237317 returned with exit code 1
	I1210 07:41:22.796138 1064794 network_create.go:287] error running [docker network inspect newest-cni-237317]: docker network inspect newest-cni-237317: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-237317 not found
	I1210 07:41:22.796157 1064794 network_create.go:289] output of [docker network inspect newest-cni-237317]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-237317 not found
	
	** /stderr **
	I1210 07:41:22.796260 1064794 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:41:22.817585 1064794 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7092cc4ae12c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:9b:65:77:38:2f} reservation:<nil>}
	I1210 07:41:22.818052 1064794 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-948cd8ab8a49 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6e:79:0a:43:2f:62} reservation:<nil>}
	I1210 07:41:22.818535 1064794 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-21ed51b7c74f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:35:c0:64:58:42} reservation:<nil>}
	I1210 07:41:22.819200 1064794 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400189ad90}
	I1210 07:41:22.819255 1064794 network_create.go:124] attempt to create docker network newest-cni-237317 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 07:41:22.819429 1064794 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-237317 newest-cni-237317
	I1210 07:41:22.888302 1064794 network_create.go:108] docker network newest-cni-237317 192.168.76.0/24 created
	I1210 07:41:22.888339 1064794 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-237317" container
	I1210 07:41:22.888413 1064794 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:41:22.905595 1064794 cli_runner.go:164] Run: docker volume create newest-cni-237317 --label name.minikube.sigs.k8s.io=newest-cni-237317 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:41:22.928697 1064794 oci.go:103] Successfully created a docker volume newest-cni-237317
	I1210 07:41:22.928792 1064794 cli_runner.go:164] Run: docker run --rm --name newest-cni-237317-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-237317 --entrypoint /usr/bin/test -v newest-cni-237317:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
	I1210 07:41:23.496836 1064794 oci.go:107] Successfully prepared a docker volume newest-cni-237317
	I1210 07:41:23.496907 1064794 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:41:23.496920 1064794 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 07:41:23.497004 1064794 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-237317:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 07:41:27.695198 1064794 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-237317:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir: (4.198140987s)
	I1210 07:41:27.695236 1064794 kic.go:203] duration metric: took 4.198307373s to extract preloaded images to volume ...
	W1210 07:41:27.695375 1064794 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 07:41:27.695491 1064794 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:41:27.749812 1064794 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-237317 --name newest-cni-237317 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-237317 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-237317 --network newest-cni-237317 --ip 192.168.76.2 --volume newest-cni-237317:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
	I1210 07:41:28.033415 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Running}}
	I1210 07:41:28.055793 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:41:28.079642 1064794 cli_runner.go:164] Run: docker exec newest-cni-237317 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:41:28.133214 1064794 oci.go:144] the created container "newest-cni-237317" has a running status.
	I1210 07:41:28.133248 1064794 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa...
	I1210 07:41:28.633820 1064794 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:41:28.653829 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:41:28.671371 1064794 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:41:28.671397 1064794 kic_runner.go:114] Args: [docker exec --privileged newest-cni-237317 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:41:28.713692 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:41:28.729850 1064794 machine.go:94] provisionDockerMachine start ...
	I1210 07:41:28.729960 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:28.748329 1064794 main.go:143] libmachine: Using SSH client type: native
	I1210 07:41:28.748679 1064794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33835 <nil> <nil>}
	I1210 07:41:28.748697 1064794 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:41:28.749343 1064794 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 07:41:31.886152 1064794 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-237317
	
	I1210 07:41:31.886178 1064794 ubuntu.go:182] provisioning hostname "newest-cni-237317"
	I1210 07:41:31.886283 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:31.903879 1064794 main.go:143] libmachine: Using SSH client type: native
	I1210 07:41:31.904204 1064794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33835 <nil> <nil>}
	I1210 07:41:31.904222 1064794 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-237317 && echo "newest-cni-237317" | sudo tee /etc/hostname
	I1210 07:41:32.048555 1064794 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-237317
	
	I1210 07:41:32.048637 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.067055 1064794 main.go:143] libmachine: Using SSH client type: native
	I1210 07:41:32.067377 1064794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33835 <nil> <nil>}
	I1210 07:41:32.067401 1064794 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-237317' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-237317/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-237317' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:41:32.202608 1064794 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:41:32.202637 1064794 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 07:41:32.202667 1064794 ubuntu.go:190] setting up certificates
	I1210 07:41:32.202678 1064794 provision.go:84] configureAuth start
	I1210 07:41:32.202744 1064794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:41:32.219337 1064794 provision.go:143] copyHostCerts
	I1210 07:41:32.219404 1064794 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 07:41:32.219420 1064794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 07:41:32.219497 1064794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 07:41:32.219602 1064794 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 07:41:32.219616 1064794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 07:41:32.219646 1064794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 07:41:32.219709 1064794 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 07:41:32.219718 1064794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 07:41:32.219745 1064794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 07:41:32.219807 1064794 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.newest-cni-237317 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-237317]
	I1210 07:41:32.533791 1064794 provision.go:177] copyRemoteCerts
	I1210 07:41:32.533865 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:41:32.533934 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.551601 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:32.650073 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:41:32.667141 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:41:32.684669 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:41:32.702081 1064794 provision.go:87] duration metric: took 499.382435ms to configureAuth
	I1210 07:41:32.702111 1064794 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:41:32.702312 1064794 config.go:182] Loaded profile config "newest-cni-237317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:41:32.702326 1064794 machine.go:97] duration metric: took 3.972452975s to provisionDockerMachine
	I1210 07:41:32.702334 1064794 client.go:176] duration metric: took 9.950533371s to LocalClient.Create
	I1210 07:41:32.702347 1064794 start.go:167] duration metric: took 9.950596741s to libmachine.API.Create "newest-cni-237317"
	I1210 07:41:32.702357 1064794 start.go:293] postStartSetup for "newest-cni-237317" (driver="docker")
	I1210 07:41:32.702367 1064794 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:41:32.702426 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:41:32.702514 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.718852 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:32.814355 1064794 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:41:32.817769 1064794 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:41:32.817798 1064794 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:41:32.817811 1064794 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 07:41:32.817871 1064794 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 07:41:32.817953 1064794 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 07:41:32.818081 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:41:32.825310 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:41:32.842517 1064794 start.go:296] duration metric: took 140.145403ms for postStartSetup
	I1210 07:41:32.842887 1064794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:41:32.859215 1064794 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json ...
	I1210 07:41:32.859502 1064794 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:41:32.859553 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.875883 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:32.967611 1064794 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:41:32.972359 1064794 start.go:128] duration metric: took 10.224272788s to createHost
	I1210 07:41:32.972384 1064794 start.go:83] releasing machines lock for "newest-cni-237317", held for 10.224421419s
	I1210 07:41:32.972457 1064794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:41:32.990273 1064794 ssh_runner.go:195] Run: cat /version.json
	I1210 07:41:32.990351 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.990655 1064794 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:41:32.990729 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:33.013202 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:33.031539 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:33.114754 1064794 ssh_runner.go:195] Run: systemctl --version
	I1210 07:41:33.211079 1064794 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:41:33.215428 1064794 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:41:33.215545 1064794 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:41:33.242581 1064794 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1210 07:41:33.242622 1064794 start.go:496] detecting cgroup driver to use...
	I1210 07:41:33.242657 1064794 detect.go:187] detected "cgroupfs" cgroup driver on host os
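
That "cgroupfs" result drives the containerd and kubelet configuration edits that follow. A hedged Go sketch of one common way to make the v1-vs-v2 distinction (not minikube's detect.go; the heuristic is simply that cgroup v2's unified hierarchy exposes a cgroup.controllers file):

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// On cgroup v2 the unified hierarchy exposes cgroup.controllers;
    	// its absence implies a v1 (or hybrid) layout, where cgroupfs is
    	// the usual driver unless systemd is managing cgroups.
    	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
    		fmt.Println("cgroup v2 (unified hierarchy)")
    	} else {
    		fmt.Println("cgroup v1; cgroupfs driver is the common default")
    	}
    }
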
	I1210 07:41:33.242740 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:41:33.257818 1064794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:41:33.270562 1064794 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:41:33.270659 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:41:33.288766 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:41:33.307284 1064794 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:41:33.417555 1064794 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:41:33.559224 1064794 docker.go:234] disabling docker service ...
	I1210 07:41:33.559382 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:41:33.583026 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:41:33.596320 1064794 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:41:33.714101 1064794 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:41:33.838575 1064794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:41:33.853369 1064794 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:41:33.868162 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:41:33.876869 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:41:33.885636 1064794 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:41:33.885711 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:41:33.894404 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:41:33.903504 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:41:33.912288 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:41:33.920951 1064794 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:41:33.929214 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:41:33.938205 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:41:33.947047 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:41:33.955864 1064794 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:41:33.963242 1064794 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:41:33.970548 1064794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:41:34.113023 1064794 ssh_runner.go:195] Run: sudo systemctl restart containerd
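
Taken together, the sed edits above converge on a small set of CRI settings before containerd is restarted. An illustrative /etc/containerd/config.toml fragment with those values (table names follow the config version 2 layout of containerd 1.x; containerd 2.2.0, as running here, names its CRI plugin tables differently, so treat this purely as a sketch of the intent):

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10.1"
      restrict_oom_score_adj = false
      enable_unprivileged_ports = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false
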
	I1210 07:41:34.252751 1064794 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:41:34.252855 1064794 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:41:34.256875 1064794 start.go:564] Will wait 60s for crictl version
	I1210 07:41:34.256993 1064794 ssh_runner.go:195] Run: which crictl
	I1210 07:41:34.260563 1064794 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:41:34.285437 1064794 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:41:34.285530 1064794 ssh_runner.go:195] Run: containerd --version
	I1210 07:41:34.307510 1064794 ssh_runner.go:195] Run: containerd --version
	I1210 07:41:34.335239 1064794 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 07:41:34.338330 1064794 cli_runner.go:164] Run: docker network inspect newest-cni-237317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:41:34.356185 1064794 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 07:41:34.360231 1064794 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
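
The /etc/hosts pipeline above is an upsert: any stale host.minikube.internal line is filtered out, then the fresh mapping is appended and the file is copied back into place. A minimal Go sketch of the same idea (hypothetical helper; the shell version additionally stages through /tmp/h.$$ and sudo cp because /etc/hosts is root-owned):

    package main

    import (
    	"os"
    	"strings"
    )

    func upsertHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// drop any existing mapping for this name (matches the grep -v above)
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	// hypothetical usage mirroring the log line above
    	_ = upsertHost("/etc/hosts", "192.168.76.1", "host.minikube.internal")
    }
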
	I1210 07:41:34.373151 1064794 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 07:41:34.376063 1064794 kubeadm.go:884] updating cluster {Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:41:34.376220 1064794 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:41:34.376306 1064794 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:41:34.404402 1064794 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:41:34.404424 1064794 containerd.go:534] Images already preloaded, skipping extraction
	I1210 07:41:34.404484 1064794 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:41:34.432485 1064794 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:41:34.432510 1064794 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:41:34.432518 1064794 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1210 07:41:34.432610 1064794 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-237317 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:41:34.432688 1064794 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:41:34.457473 1064794 cni.go:84] Creating CNI manager for ""
	I1210 07:41:34.457499 1064794 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:41:34.457517 1064794 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 07:41:34.457543 1064794 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-237317 NodeName:newest-cni-237317 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:41:34.457665 1064794 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-237317"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:41:34.457735 1064794 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:41:34.465701 1064794 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:41:34.465807 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:41:34.473755 1064794 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 07:41:34.486983 1064794 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:41:34.499868 1064794 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1210 07:41:34.513272 1064794 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:41:34.517130 1064794 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:41:34.527569 1064794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:41:34.663379 1064794 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:41:34.680375 1064794 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317 for IP: 192.168.76.2
	I1210 07:41:34.680450 1064794 certs.go:195] generating shared ca certs ...
	I1210 07:41:34.680483 1064794 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.680674 1064794 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 07:41:34.680764 1064794 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 07:41:34.680789 1064794 certs.go:257] generating profile certs ...
	I1210 07:41:34.680884 1064794 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.key
	I1210 07:41:34.680928 1064794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.crt with IP's: []
	I1210 07:41:34.839451 1064794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.crt ...
	I1210 07:41:34.839486 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.crt: {Name:mk864b17e4815ee03fc5eadc45f8f3d330d86e28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.839718 1064794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.key ...
	I1210 07:41:34.839736 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.key: {Name:mkac75ec3f8c520b4be98288202003aea88a7881 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.840557 1064794 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f
	I1210 07:41:34.840584 1064794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1210 07:41:34.941668 1064794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f ...
	I1210 07:41:34.941702 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f: {Name:mkd71b8623c8311dc88c663a4045d0b1945deec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.941880 1064794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f ...
	I1210 07:41:34.941896 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f: {Name:mk237d8326178abb6dfc7e4dd919116ec45ea9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.941986 1064794 certs.go:382] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt
	I1210 07:41:34.942080 1064794 certs.go:386] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key
	I1210 07:41:34.942146 1064794 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key
	I1210 07:41:34.942168 1064794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt with IP's: []
	I1210 07:41:35.425873 1064794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt ...
	I1210 07:41:35.425915 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt: {Name:mk51f419728d59ba7ab729d028e45d36640d0231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:35.426770 1064794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key ...
	I1210 07:41:35.426789 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key: {Name:mkb1e5352ebb3c5d51e6e8aed5c36263957e6d22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:35.426995 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 07:41:35.427044 1064794 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 07:41:35.427053 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:41:35.427086 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:41:35.427120 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:41:35.427152 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 07:41:35.427211 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:41:35.427806 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:41:35.447027 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:41:35.465831 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:41:35.484250 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:41:35.503105 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:41:35.521550 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:41:35.540269 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:41:35.563542 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 07:41:35.585738 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 07:41:35.609714 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 07:41:35.628843 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:41:35.647443 1064794 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:41:35.660385 1064794 ssh_runner.go:195] Run: openssl version
	I1210 07:41:35.666967 1064794 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.674504 1064794 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 07:41:35.682195 1064794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.685889 1064794 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.686015 1064794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.727961 1064794 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:41:35.735327 1064794 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/786751.pem /etc/ssl/certs/51391683.0
	I1210 07:41:35.742756 1064794 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.750368 1064794 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 07:41:35.757661 1064794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.761188 1064794 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.761251 1064794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.802868 1064794 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:41:35.810435 1064794 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7867512.pem /etc/ssl/certs/3ec20f2e.0
	I1210 07:41:35.817738 1064794 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.825308 1064794 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:41:35.832635 1064794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.836310 1064794 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.836372 1064794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.877289 1064794 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:41:35.884982 1064794 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
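
Each ln -fs above gives the PEM an OpenSSL-style <subject-hash>.0 alias, which is how c_rehash-style trust directories are looked up. A minimal Go sketch of that hash-and-symlink step (hypothetical helper that shells out to the same openssl invocation seen in the log):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func linkBySubjectHash(pemPath, certDir string) error {
    	// same invocation as the log: openssl x509 -hash -noout -in <pem>
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certDir, hash+".0")
    	os.Remove(link) // mirror `ln -fs`: replace any stale link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
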
	I1210 07:41:35.892614 1064794 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:41:35.896331 1064794 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:41:35.896383 1064794 kubeadm.go:401] StartCluster: {Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:41:35.896476 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:41:35.896540 1064794 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:41:35.926036 1064794 cri.go:89] found id: ""
	I1210 07:41:35.926112 1064794 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:41:35.934414 1064794 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:41:35.942276 1064794 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:41:35.942375 1064794 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:41:35.950211 1064794 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:41:35.950233 1064794 kubeadm.go:158] found existing configuration files:
	
	I1210 07:41:35.950309 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:41:35.957845 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:41:35.957962 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:41:35.966039 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:41:35.973562 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:41:35.973662 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:41:35.980914 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:41:35.988697 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:41:35.988772 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:41:35.996172 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:41:36.005049 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:41:36.005181 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:41:36.014173 1064794 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:41:36.062406 1064794 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:41:36.062713 1064794 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:41:36.147250 1064794 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:41:36.147383 1064794 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:41:36.147459 1064794 kubeadm.go:319] OS: Linux
	I1210 07:41:36.147537 1064794 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:41:36.147617 1064794 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:41:36.147688 1064794 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:41:36.147759 1064794 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:41:36.147834 1064794 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:41:36.147902 1064794 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:41:36.147977 1064794 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:41:36.148046 1064794 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:41:36.148124 1064794 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:41:36.223682 1064794 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:41:36.223913 1064794 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:41:36.224078 1064794 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:41:36.230559 1064794 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:41:36.237069 1064794 out.go:252]   - Generating certificates and keys ...
	I1210 07:41:36.237182 1064794 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:41:36.237257 1064794 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:41:36.476610 1064794 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:41:36.561778 1064794 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:41:36.854281 1064794 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:41:37.263690 1064794 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:41:37.370103 1064794 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:41:37.370484 1064794 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 07:41:37.933573 1064794 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:41:37.934013 1064794 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 07:41:38.192710 1064794 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:41:38.352643 1064794 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:41:38.587081 1064794 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:41:38.587306 1064794 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:41:38.909718 1064794 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:41:39.048089 1064794 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:41:39.097056 1064794 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:41:39.169471 1064794 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:41:39.365961 1064794 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:41:39.366635 1064794 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:41:39.369209 1064794 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:41:39.372794 1064794 out.go:252]   - Booting up control plane ...
	I1210 07:41:39.372894 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:41:39.372969 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:41:39.374027 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:41:39.390260 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:41:39.390694 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:41:39.397555 1064794 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:41:39.397883 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:41:39.397929 1064794 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:41:39.536450 1064794 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:41:39.536565 1064794 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:45:23.105552 1061272 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000403225s
	I1210 07:45:23.105596 1061272 kubeadm.go:319] 
	I1210 07:45:23.105659 1061272 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:45:23.105695 1061272 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:45:23.105810 1061272 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:45:23.105817 1061272 kubeadm.go:319] 
	I1210 07:45:23.105931 1061272 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:45:23.105968 1061272 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:45:23.106003 1061272 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:45:23.106008 1061272 kubeadm.go:319] 
	I1210 07:45:23.110089 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:23.110529 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:23.110638 1061272 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:45:23.110873 1061272 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:45:23.110878 1061272 kubeadm.go:319] 
	I1210 07:45:23.110946 1061272 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 07:45:23.111048 1061272 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-587009] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-587009] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000403225s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
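
If running this kernel's cgroup v1 setup is actually intended, the second SystemVerification warning above names the kubelet knob to flip. A sketch of the relevant KubeletConfiguration lines (assuming the camelCase field spelling used by kubelet.config.k8s.io/v1beta1; per the warning, the preflight validation must also be skipped explicitly):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # sketch: opt back in to cgroup v1 for kubelet v1.35+, per the warning above
    failCgroupV1: false
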
	
	I1210 07:45:23.111129 1061272 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 07:45:23.528980 1061272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:45:23.543064 1061272 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:45:23.543133 1061272 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:45:23.552680 1061272 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:45:23.552702 1061272 kubeadm.go:158] found existing configuration files:
	
	I1210 07:45:23.552757 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:45:23.561132 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:45:23.561196 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:45:23.569220 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:45:23.577552 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:45:23.577617 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:45:23.585736 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:45:23.594195 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:45:23.594261 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:45:23.602367 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:45:23.610802 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:45:23.610868 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:45:23.618934 1061272 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:45:23.738244 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:23.738666 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:23.820302 1061272 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:45:39.537616 1064794 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001129575s
	I1210 07:45:39.537650 1064794 kubeadm.go:319] 
	I1210 07:45:39.537709 1064794 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:45:39.537747 1064794 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:45:39.537857 1064794 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:45:39.537866 1064794 kubeadm.go:319] 
	I1210 07:45:39.537971 1064794 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:45:39.538008 1064794 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:45:39.538043 1064794 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:45:39.538052 1064794 kubeadm.go:319] 
	I1210 07:45:39.542860 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:39.543622 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:39.543819 1064794 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:45:39.544243 1064794 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:45:39.544260 1064794 kubeadm.go:319] 
	I1210 07:45:39.544379 1064794 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 07:45:39.544512 1064794 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001129575s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 07:45:39.544665 1064794 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 07:45:40.003717 1064794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:45:40.026427 1064794 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:45:40.026565 1064794 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:45:40.036588 1064794 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:45:40.036615 1064794 kubeadm.go:158] found existing configuration files:
	
	I1210 07:45:40.036678 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:45:40.045938 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:45:40.046015 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:45:40.054590 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:45:40.063126 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:45:40.063204 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:45:40.071408 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:45:40.079679 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:45:40.079771 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:45:40.088102 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:45:40.097134 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:45:40.097216 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:45:40.105436 1064794 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:45:40.222290 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:40.222807 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:40.298915 1064794 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:49:26.015309 1061272 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 07:49:26.015352 1061272 kubeadm.go:319] 
	I1210 07:49:26.015478 1061272 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 07:49:26.021506 1061272 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:49:26.021573 1061272 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:49:26.021669 1061272 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:49:26.021735 1061272 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:49:26.021780 1061272 kubeadm.go:319] OS: Linux
	I1210 07:49:26.021833 1061272 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:49:26.021898 1061272 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:49:26.021954 1061272 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:49:26.022012 1061272 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:49:26.022072 1061272 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:49:26.022130 1061272 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:49:26.022183 1061272 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:49:26.022239 1061272 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:49:26.022294 1061272 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:49:26.022377 1061272 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:49:26.022520 1061272 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:49:26.022665 1061272 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:49:26.022797 1061272 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:49:26.025625 1061272 out.go:252]   - Generating certificates and keys ...
	I1210 07:49:26.025738 1061272 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:49:26.025820 1061272 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:49:26.025909 1061272 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:49:26.025981 1061272 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:49:26.026084 1061272 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:49:26.026145 1061272 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:49:26.026218 1061272 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:49:26.026288 1061272 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:49:26.026372 1061272 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:49:26.026456 1061272 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:49:26.026527 1061272 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:49:26.026596 1061272 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:49:26.026658 1061272 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:49:26.026731 1061272 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:49:26.026814 1061272 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:49:26.026910 1061272 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:49:26.027000 1061272 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:49:26.027123 1061272 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:49:26.027217 1061272 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:49:26.032204 1061272 out.go:252]   - Booting up control plane ...
	I1210 07:49:26.032327 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:49:26.032449 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:49:26.032535 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:49:26.032660 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:49:26.032760 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:49:26.032871 1061272 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:49:26.032963 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:49:26.033008 1061272 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:49:26.033144 1061272 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:49:26.033252 1061272 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:49:26.033319 1061272 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00018658s
	I1210 07:49:26.033356 1061272 kubeadm.go:319] 
	I1210 07:49:26.033430 1061272 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:49:26.033471 1061272 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:49:26.033578 1061272 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:49:26.033591 1061272 kubeadm.go:319] 
	I1210 07:49:26.033695 1061272 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:49:26.033732 1061272 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:49:26.033765 1061272 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:49:26.033838 1061272 kubeadm.go:403] duration metric: took 8m9.047256448s to StartCluster
	I1210 07:49:26.033878 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:49:26.033967 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:49:26.034180 1061272 kubeadm.go:319] 
	I1210 07:49:26.078012 1061272 cri.go:89] found id: ""
	I1210 07:49:26.078053 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.078063 1061272 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:49:26.078088 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:49:26.078174 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:49:26.106609 1061272 cri.go:89] found id: ""
	I1210 07:49:26.106637 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.106653 1061272 logs.go:284] No container was found matching "etcd"
	I1210 07:49:26.106660 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:49:26.106763 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:49:26.132553 1061272 cri.go:89] found id: ""
	I1210 07:49:26.132579 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.132589 1061272 logs.go:284] No container was found matching "coredns"
	I1210 07:49:26.132595 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:49:26.132657 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:49:26.159729 1061272 cri.go:89] found id: ""
	I1210 07:49:26.159779 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.159789 1061272 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:49:26.159797 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:49:26.159864 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:49:26.185308 1061272 cri.go:89] found id: ""
	I1210 07:49:26.185386 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.185409 1061272 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:49:26.185430 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:49:26.185524 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:49:26.210297 1061272 cri.go:89] found id: ""
	I1210 07:49:26.210364 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.210388 1061272 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:49:26.210409 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:49:26.210538 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:49:26.235247 1061272 cri.go:89] found id: ""
	I1210 07:49:26.235320 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.235341 1061272 logs.go:284] No container was found matching "kindnet"
	I1210 07:49:26.235352 1061272 logs.go:123] Gathering logs for kubelet ...
	I1210 07:49:26.235364 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:49:26.292545 1061272 logs.go:123] Gathering logs for dmesg ...
	I1210 07:49:26.292580 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:49:26.309666 1061272 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:49:26.309695 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:49:26.371886 1061272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:49:26.363797    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.364463    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.366186    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.366720    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.368404    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:49:26.363797    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.364463    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.366186    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.366720    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.368404    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:49:26.371909 1061272 logs.go:123] Gathering logs for containerd ...
	I1210 07:49:26.371922 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:49:26.414122 1061272 logs.go:123] Gathering logs for container status ...
	I1210 07:49:26.414158 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:49:26.443108 1061272 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00018658s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 07:49:26.443165 1061272 out.go:285] * 
	W1210 07:49:26.443224 1061272 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00018658s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:49:26.443242 1061272 out.go:285] * 
	W1210 07:49:26.445452 1061272 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:49:26.452172 1061272 out.go:203] 
	W1210 07:49:26.455094 1061272 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00018658s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:49:26.455136 1061272 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:49:26.455159 1061272 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:49:26.458257 1061272 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 07:41:06 no-preload-587009 containerd[758]: time="2025-12-10T07:41:06.789083088Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:08 no-preload-587009 containerd[758]: time="2025-12-10T07:41:08.284850821Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 10 07:41:08 no-preload-587009 containerd[758]: time="2025-12-10T07:41:08.287114833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 10 07:41:08 no-preload-587009 containerd[758]: time="2025-12-10T07:41:08.296151055Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:08 no-preload-587009 containerd[758]: time="2025-12-10T07:41:08.297726981Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:09 no-preload-587009 containerd[758]: time="2025-12-10T07:41:09.295008191Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 10 07:41:09 no-preload-587009 containerd[758]: time="2025-12-10T07:41:09.297291871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 10 07:41:09 no-preload-587009 containerd[758]: time="2025-12-10T07:41:09.305236801Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:09 no-preload-587009 containerd[758]: time="2025-12-10T07:41:09.313440846Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:10 no-preload-587009 containerd[758]: time="2025-12-10T07:41:10.490269450Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 10 07:41:10 no-preload-587009 containerd[758]: time="2025-12-10T07:41:10.493135235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 10 07:41:10 no-preload-587009 containerd[758]: time="2025-12-10T07:41:10.503850918Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:10 no-preload-587009 containerd[758]: time="2025-12-10T07:41:10.504417343Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:11 no-preload-587009 containerd[758]: time="2025-12-10T07:41:11.559054031Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 10 07:41:11 no-preload-587009 containerd[758]: time="2025-12-10T07:41:11.561269122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 10 07:41:11 no-preload-587009 containerd[758]: time="2025-12-10T07:41:11.569663283Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:11 no-preload-587009 containerd[758]: time="2025-12-10T07:41:11.570266705Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:12 no-preload-587009 containerd[758]: time="2025-12-10T07:41:12.618033993Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 10 07:41:12 no-preload-587009 containerd[758]: time="2025-12-10T07:41:12.620356878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 10 07:41:12 no-preload-587009 containerd[758]: time="2025-12-10T07:41:12.629513282Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:12 no-preload-587009 containerd[758]: time="2025-12-10T07:41:12.630204657Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:13 no-preload-587009 containerd[758]: time="2025-12-10T07:41:13.276669096Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 10 07:41:13 no-preload-587009 containerd[758]: time="2025-12-10T07:41:13.278998807Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 10 07:41:13 no-preload-587009 containerd[758]: time="2025-12-10T07:41:13.285987103Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:13 no-preload-587009 containerd[758]: time="2025-12-10T07:41:13.286306090Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:49:27.538062    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:27.538545    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:27.540276    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:27.540780    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:27.544517    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:49:27 up  6:31,  0 user,  load average: 0.34, 1.01, 1.70
	Linux no-preload-587009 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:49:24 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:49:25 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 10 07:49:25 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:25 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:25 no-preload-587009 kubelet[5336]: E1210 07:49:25.340472    5336 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:49:25 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:49:25 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:49:26 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 10 07:49:26 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:26 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:26 no-preload-587009 kubelet[5347]: E1210 07:49:26.109360    5347 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:49:26 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:49:26 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:49:26 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 10 07:49:26 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:26 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:26 no-preload-587009 kubelet[5440]: E1210 07:49:26.866548    5440 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:49:26 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:49:26 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:49:27 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 10 07:49:27 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:27 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:27 no-preload-587009 kubelet[5532]: E1210 07:49:27.601767    5532 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:49:27 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:49:27 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-587009 -n no-preload-587009
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-587009 -n no-preload-587009: exit status 6 (320.513604ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 07:49:27.984899 1074041 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-587009" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-587009" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (512.19s)
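Note on the failure above: the root cause is visible in the kubelet journal section of the log. The host kernel is on cgroup v1, and kubelet v1.35.0-beta.0 fails its configuration validation with "kubelet is configured to not run on a host using cgroup v1", crash-looping under systemd (restart counters 319 through 322) until kubeadm's 4m0s healthz wait expires. The kubeadm preflight warning names the opt-out: set the kubelet configuration option 'FailCgroupV1' to 'false' and explicitly skip the validation. A minimal sketch of that fragment follows, assuming the camelCase field name failCgroupV1 in the KubeletConfiguration that kubeadm writes to /var/lib/kubelet/config.yaml; the log shows minikube already patching the "kubeletconfiguration" target, which is where such a fragment would plausibly be merged. This is an illustration, not configuration taken from this run:

	# KubeletConfiguration fragment (sketch, not from this run): opt the kubelet
	# back into deprecated cgroup v1 support, as the kubeadm warning instructs.
	# Field name assumed to be the camelCase form of 'FailCgroupV1'.
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false

minikube's generic suggestion in the log is `--extra-config=kubelet.cgroup-driver=systemd` on `minikube start`, but the kubelet error here is about cgroup v1 itself rather than the cgroup driver; per the warning's own advice ("Please migrate to cgroups v2"), moving these CI hosts to a cgroup v2 kernel is the durable fix.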
x
+
TestStartStop/group/newest-cni/serial/FirstStart (501.9s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-237317 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1210 07:41:43.004953  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/old-k8s-version-166796/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:41:43.011375  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/old-k8s-version-166796/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:41:43.022938  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/old-k8s-version-166796/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:41:43.044448  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/old-k8s-version-166796/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:41:43.085983  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/old-k8s-version-166796/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:41:43.167577  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/old-k8s-version-166796/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:41:43.329128  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/old-k8s-version-166796/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:41:43.650886  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/old-k8s-version-166796/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:41:44.293154  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/old-k8s-version-166796/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:41:45.574663  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/old-k8s-version-166796/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:41:48.136119  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/old-k8s-version-166796/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:41:53.258288  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/old-k8s-version-166796/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:42:03.500627  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/old-k8s-version-166796/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:42:23.982548  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/old-k8s-version-166796/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:42:35.782554  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:43:04.944314  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/old-k8s-version-166796/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:43:17.494376  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:44:16.546280  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/default-k8s-diff-port-444518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:44:16.552675  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/default-k8s-diff-port-444518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:44:16.564067  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/default-k8s-diff-port-444518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:44:16.585431  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/default-k8s-diff-port-444518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:44:16.626900  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/default-k8s-diff-port-444518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:44:16.708422  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/default-k8s-diff-port-444518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:44:16.870034  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/default-k8s-diff-port-444518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:44:17.191792  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/default-k8s-diff-port-444518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:44:17.833716  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/default-k8s-diff-port-444518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:44:19.115161  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/default-k8s-diff-port-444518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:44:21.678119  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/default-k8s-diff-port-444518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:44:26.799457  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/default-k8s-diff-port-444518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:44:26.865885  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/old-k8s-version-166796/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:44:37.041787  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/default-k8s-diff-port-444518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:44:57.523203  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/default-k8s-diff-port-444518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:45:07.322254  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:45:14.424383  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:45:24.250979  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:45:38.486138  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/default-k8s-diff-port-444518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:46:43.011180  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/old-k8s-version-166796/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:47:00.411360  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/default-k8s-diff-port-444518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:47:10.710664  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/old-k8s-version-166796/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:47:35.782620  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:49:16.546341  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/default-k8s-diff-port-444518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-237317 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m20.355945395s)

-- stdout --
	* [newest-cni-237317] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "newest-cni-237317" primary control-plane node in "newest-cni-237317" cluster
	* Pulling base image v0.0.48-1765319469-22089 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

-- /stdout --
** stderr ** 
	I1210 07:41:22.419570 1064794 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:41:22.419770 1064794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:41:22.421060 1064794 out.go:374] Setting ErrFile to fd 2...
	I1210 07:41:22.421091 1064794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:41:22.421504 1064794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 07:41:22.422065 1064794 out.go:368] Setting JSON to false
	I1210 07:41:22.423214 1064794 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23007,"bootTime":1765329476,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 07:41:22.423323 1064794 start.go:143] virtualization:  
	I1210 07:41:22.427718 1064794 out.go:179] * [newest-cni-237317] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:41:22.431355 1064794 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:41:22.431601 1064794 notify.go:221] Checking for updates...
	I1210 07:41:22.438163 1064794 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:41:22.441491 1064794 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:41:22.444751 1064794 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 07:41:22.447902 1064794 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:41:22.451150 1064794 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:41:22.454892 1064794 config.go:182] Loaded profile config "no-preload-587009": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:41:22.454990 1064794 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:41:22.505215 1064794 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:41:22.505348 1064794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:41:22.599796 1064794 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-10 07:41:22.587493789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:41:22.599897 1064794 docker.go:319] overlay module found
	I1210 07:41:22.603335 1064794 out.go:179] * Using the docker driver based on user configuration
	I1210 07:41:22.606318 1064794 start.go:309] selected driver: docker
	I1210 07:41:22.606341 1064794 start.go:927] validating driver "docker" against <nil>
	I1210 07:41:22.606356 1064794 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:41:22.607143 1064794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:41:22.691557 1064794 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-10 07:41:22.681931889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:41:22.691722 1064794 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1210 07:41:22.691759 1064794 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1210 07:41:22.691991 1064794 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 07:41:22.702590 1064794 out.go:179] * Using Docker driver with root privileges
	I1210 07:41:22.705586 1064794 cni.go:84] Creating CNI manager for ""
	I1210 07:41:22.705658 1064794 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:41:22.705667 1064794 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 07:41:22.705758 1064794 start.go:353] cluster config:
	{Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:41:22.709067 1064794 out.go:179] * Starting "newest-cni-237317" primary control-plane node in "newest-cni-237317" cluster
	I1210 07:41:22.711942 1064794 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:41:22.714970 1064794 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:41:22.717872 1064794 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:41:22.717929 1064794 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 07:41:22.717943 1064794 cache.go:65] Caching tarball of preloaded images
	I1210 07:41:22.718049 1064794 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 07:41:22.718059 1064794 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1210 07:41:22.718202 1064794 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json ...
	I1210 07:41:22.718221 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json: {Name:mk35831d9cdfb4ee294c317ea1250d3c633e2dde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:22.718581 1064794 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:41:22.747190 1064794 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:41:22.747214 1064794 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:41:22.747228 1064794 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:41:22.747259 1064794 start.go:360] acquireMachinesLock for newest-cni-237317: {Name:mk865fae7594bb364b28b787041666ed4ecb9dd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:41:22.747944 1064794 start.go:364] duration metric: took 667.063µs to acquireMachinesLock for "newest-cni-237317"
	I1210 07:41:22.747984 1064794 start.go:93] Provisioning new machine with config: &{Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:41:22.748068 1064794 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:41:22.751504 1064794 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:41:22.751752 1064794 start.go:159] libmachine.API.Create for "newest-cni-237317" (driver="docker")
	I1210 07:41:22.751794 1064794 client.go:173] LocalClient.Create starting
	I1210 07:41:22.751869 1064794 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem
	I1210 07:41:22.751907 1064794 main.go:143] libmachine: Decoding PEM data...
	I1210 07:41:22.751924 1064794 main.go:143] libmachine: Parsing certificate...
	I1210 07:41:22.751982 1064794 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem
	I1210 07:41:22.751999 1064794 main.go:143] libmachine: Decoding PEM data...
	I1210 07:41:22.752011 1064794 main.go:143] libmachine: Parsing certificate...
	I1210 07:41:22.752421 1064794 cli_runner.go:164] Run: docker network inspect newest-cni-237317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:41:22.771298 1064794 cli_runner.go:211] docker network inspect newest-cni-237317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:41:22.771401 1064794 network_create.go:284] running [docker network inspect newest-cni-237317] to gather additional debugging logs...
	I1210 07:41:22.771420 1064794 cli_runner.go:164] Run: docker network inspect newest-cni-237317
	W1210 07:41:22.796107 1064794 cli_runner.go:211] docker network inspect newest-cni-237317 returned with exit code 1
	I1210 07:41:22.796138 1064794 network_create.go:287] error running [docker network inspect newest-cni-237317]: docker network inspect newest-cni-237317: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-237317 not found
	I1210 07:41:22.796157 1064794 network_create.go:289] output of [docker network inspect newest-cni-237317]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-237317 not found
	
	** /stderr **
	I1210 07:41:22.796260 1064794 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:41:22.817585 1064794 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7092cc4ae12c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:9b:65:77:38:2f} reservation:<nil>}
	I1210 07:41:22.818052 1064794 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-948cd8ab8a49 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6e:79:0a:43:2f:62} reservation:<nil>}
	I1210 07:41:22.818535 1064794 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-21ed51b7c74f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:35:c0:64:58:42} reservation:<nil>}
	I1210 07:41:22.819200 1064794 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400189ad90}
	I1210 07:41:22.819255 1064794 network_create.go:124] attempt to create docker network newest-cni-237317 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 07:41:22.819429 1064794 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-237317 newest-cni-237317
	I1210 07:41:22.888302 1064794 network_create.go:108] docker network newest-cni-237317 192.168.76.0/24 created
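	The subnet selection above follows a simple scan: candidate 192.168.x.0/24 networks are tried nine octets apart (49, 58, 67, ...) and the first one no existing bridge occupies wins, landing on 192.168.76.0/24 here. A rough Go sketch of that scan with the taken set hard-coded from the log lines above; minikube's actual implementation inspects live docker networks rather than a static map:
	
		package main
	
		import "fmt"
	
		func main() {
			// Bridges already present on this host, per the "skipping
			// subnet ... that is taken" lines above (hard-coded here
			// purely for illustration).
			taken := map[string]bool{
				"192.168.49.0/24": true, // br-7092cc4ae12c
				"192.168.58.0/24": true, // br-948cd8ab8a49
				"192.168.67.0/24": true, // br-21ed51b7c74f
			}
			for third := 49; third <= 247; third += 9 {
				cidr := fmt.Sprintf("192.168.%d.0/24", third)
				if !taken[cidr] {
					fmt.Println("using free private subnet", cidr) // 192.168.76.0/24
					return
				}
			}
		}
	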
	I1210 07:41:22.888339 1064794 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-237317" container
	I1210 07:41:22.888413 1064794 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:41:22.905595 1064794 cli_runner.go:164] Run: docker volume create newest-cni-237317 --label name.minikube.sigs.k8s.io=newest-cni-237317 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:41:22.928697 1064794 oci.go:103] Successfully created a docker volume newest-cni-237317
	I1210 07:41:22.928792 1064794 cli_runner.go:164] Run: docker run --rm --name newest-cni-237317-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-237317 --entrypoint /usr/bin/test -v newest-cni-237317:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
	I1210 07:41:23.496836 1064794 oci.go:107] Successfully prepared a docker volume newest-cni-237317
	I1210 07:41:23.496907 1064794 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:41:23.496920 1064794 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 07:41:23.497004 1064794 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-237317:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 07:41:27.695198 1064794 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-237317:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir: (4.198140987s)
	I1210 07:41:27.695236 1064794 kic.go:203] duration metric: took 4.198307373s to extract preloaded images to volume ...
	W1210 07:41:27.695375 1064794 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 07:41:27.695491 1064794 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:41:27.749812 1064794 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-237317 --name newest-cni-237317 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-237317 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-237317 --network newest-cni-237317 --ip 192.168.76.2 --volume newest-cni-237317:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
	I1210 07:41:28.033415 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Running}}
	I1210 07:41:28.055793 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:41:28.079642 1064794 cli_runner.go:164] Run: docker exec newest-cni-237317 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:41:28.133214 1064794 oci.go:144] the created container "newest-cni-237317" has a running status.
	I1210 07:41:28.133248 1064794 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa...
	I1210 07:41:28.633820 1064794 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:41:28.653829 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:41:28.671371 1064794 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:41:28.671397 1064794 kic_runner.go:114] Args: [docker exec --privileged newest-cni-237317 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:41:28.713692 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:41:28.729850 1064794 machine.go:94] provisionDockerMachine start ...
	I1210 07:41:28.729960 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:28.748329 1064794 main.go:143] libmachine: Using SSH client type: native
	I1210 07:41:28.748679 1064794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33835 <nil> <nil>}
	I1210 07:41:28.748697 1064794 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:41:28.749343 1064794 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 07:41:31.886152 1064794 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-237317
	
	I1210 07:41:31.886178 1064794 ubuntu.go:182] provisioning hostname "newest-cni-237317"
	I1210 07:41:31.886283 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:31.903879 1064794 main.go:143] libmachine: Using SSH client type: native
	I1210 07:41:31.904204 1064794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33835 <nil> <nil>}
	I1210 07:41:31.904222 1064794 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-237317 && echo "newest-cni-237317" | sudo tee /etc/hostname
	I1210 07:41:32.048555 1064794 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-237317
	
	I1210 07:41:32.048637 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.067055 1064794 main.go:143] libmachine: Using SSH client type: native
	I1210 07:41:32.067377 1064794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33835 <nil> <nil>}
	I1210 07:41:32.067401 1064794 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-237317' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-237317/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-237317' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:41:32.202608 1064794 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:41:32.202637 1064794 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 07:41:32.202667 1064794 ubuntu.go:190] setting up certificates
	I1210 07:41:32.202678 1064794 provision.go:84] configureAuth start
	I1210 07:41:32.202744 1064794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:41:32.219337 1064794 provision.go:143] copyHostCerts
	I1210 07:41:32.219404 1064794 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 07:41:32.219420 1064794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 07:41:32.219497 1064794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 07:41:32.219602 1064794 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 07:41:32.219616 1064794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 07:41:32.219646 1064794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 07:41:32.219709 1064794 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 07:41:32.219718 1064794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 07:41:32.219745 1064794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 07:41:32.219807 1064794 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.newest-cni-237317 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-237317]
	I1210 07:41:32.533791 1064794 provision.go:177] copyRemoteCerts
	I1210 07:41:32.533865 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:41:32.533934 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.551601 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:32.650073 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:41:32.667141 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:41:32.684669 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:41:32.702081 1064794 provision.go:87] duration metric: took 499.382435ms to configureAuth
	I1210 07:41:32.702111 1064794 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:41:32.702312 1064794 config.go:182] Loaded profile config "newest-cni-237317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:41:32.702326 1064794 machine.go:97] duration metric: took 3.972452975s to provisionDockerMachine
	I1210 07:41:32.702334 1064794 client.go:176] duration metric: took 9.950533371s to LocalClient.Create
	I1210 07:41:32.702347 1064794 start.go:167] duration metric: took 9.950596741s to libmachine.API.Create "newest-cni-237317"
	I1210 07:41:32.702357 1064794 start.go:293] postStartSetup for "newest-cni-237317" (driver="docker")
	I1210 07:41:32.702367 1064794 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:41:32.702426 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:41:32.702514 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.718852 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:32.814355 1064794 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:41:32.817769 1064794 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:41:32.817798 1064794 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:41:32.817811 1064794 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 07:41:32.817871 1064794 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 07:41:32.817953 1064794 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 07:41:32.818081 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:41:32.825310 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:41:32.842517 1064794 start.go:296] duration metric: took 140.145403ms for postStartSetup
	I1210 07:41:32.842887 1064794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:41:32.859215 1064794 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json ...
	I1210 07:41:32.859502 1064794 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:41:32.859553 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.875883 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:32.967611 1064794 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:41:32.972359 1064794 start.go:128] duration metric: took 10.224272788s to createHost
	I1210 07:41:32.972384 1064794 start.go:83] releasing machines lock for "newest-cni-237317", held for 10.224421419s
	I1210 07:41:32.972457 1064794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:41:32.990273 1064794 ssh_runner.go:195] Run: cat /version.json
	I1210 07:41:32.990351 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.990655 1064794 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:41:32.990729 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:33.013202 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:33.031539 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:33.114754 1064794 ssh_runner.go:195] Run: systemctl --version
	I1210 07:41:33.211079 1064794 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:41:33.215428 1064794 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:41:33.215545 1064794 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:41:33.242581 1064794 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1210 07:41:33.242622 1064794 start.go:496] detecting cgroup driver to use...
	I1210 07:41:33.242657 1064794 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:41:33.242740 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:41:33.257818 1064794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:41:33.270562 1064794 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:41:33.270659 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:41:33.288766 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:41:33.307284 1064794 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:41:33.417555 1064794 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:41:33.559224 1064794 docker.go:234] disabling docker service ...
	I1210 07:41:33.559382 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:41:33.583026 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:41:33.596320 1064794 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:41:33.714101 1064794 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:41:33.838575 1064794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:41:33.853369 1064794 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:41:33.868162 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:41:33.876869 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:41:33.885636 1064794 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:41:33.885711 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:41:33.894404 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:41:33.903504 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:41:33.912288 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:41:33.920951 1064794 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:41:33.929214 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:41:33.938205 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:41:33.947047 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:41:33.955864 1064794 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:41:33.963242 1064794 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:41:33.970548 1064794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:41:34.113023 1064794 ssh_runner.go:195] Run: sudo systemctl restart containerd
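	The run of sed edits above rewrites /etc/containerd/config.toml so containerd matches the "cgroupfs" driver detected on the host, the key change being SystemdCgroup = false. A Go sketch of that one rewrite under the same file layout; minikube itself shells out to sed as logged, so this is only an equivalent illustration:
	
		package main
	
		import (
			"os"
			"regexp"
		)
	
		func main() {
			// Flip SystemdCgroup to false in place, preserving indentation,
			// like `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`
			// in the log above, keeping containerd on the cgroupfs driver.
			const path = "/etc/containerd/config.toml"
			data, err := os.ReadFile(path)
			if err != nil {
				panic(err)
			}
			re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
			out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
			if err := os.WriteFile(path, out, 0o644); err != nil {
				panic(err)
			}
		}
	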
	I1210 07:41:34.252751 1064794 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:41:34.252855 1064794 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:41:34.256875 1064794 start.go:564] Will wait 60s for crictl version
	I1210 07:41:34.256993 1064794 ssh_runner.go:195] Run: which crictl
	I1210 07:41:34.260563 1064794 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:41:34.285437 1064794 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:41:34.285530 1064794 ssh_runner.go:195] Run: containerd --version
	I1210 07:41:34.307510 1064794 ssh_runner.go:195] Run: containerd --version
	I1210 07:41:34.335239 1064794 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 07:41:34.338330 1064794 cli_runner.go:164] Run: docker network inspect newest-cni-237317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:41:34.356185 1064794 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 07:41:34.360231 1064794 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:41:34.373151 1064794 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 07:41:34.376063 1064794 kubeadm.go:884] updating cluster {Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:41:34.376220 1064794 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:41:34.376306 1064794 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:41:34.404402 1064794 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:41:34.404424 1064794 containerd.go:534] Images already preloaded, skipping extraction
	I1210 07:41:34.404484 1064794 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:41:34.432485 1064794 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:41:34.432510 1064794 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:41:34.432518 1064794 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1210 07:41:34.432610 1064794 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-237317 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
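The [Unit]/[Service] fragment above becomes a systemd drop-in; the empty ExecStart= line clears the packaged command before the minikube-specific one is installed, which is the standard way to override ExecStart from a drop-in. To inspect the unit as systemd will actually run it:

	systemctl cat kubelet       # base unit plus every drop-in, in application order
	systemctl status kubelet    # shows which drop-ins were loaded and the active command line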
	I1210 07:41:34.432688 1064794 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:41:34.457473 1064794 cni.go:84] Creating CNI manager for ""
	I1210 07:41:34.457499 1064794 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:41:34.457517 1064794 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 07:41:34.457543 1064794 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-237317 NodeName:newest-cni-237317 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:41:34.457665 1064794 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-237317"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
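The rendered documents above are what gets written to /var/tmp/minikube/kubeadm.yaml and fed to kubeadm below. Recent kubeadm can sanity-check such a file before init is attempted (a sketch; assumes `kubeadm config validate` is available in this kubeadm build):

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml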
	
	I1210 07:41:34.457735 1064794 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:41:34.465701 1064794 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:41:34.465807 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:41:34.473755 1064794 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 07:41:34.486983 1064794 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:41:34.499868 1064794 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1210 07:41:34.513272 1064794 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:41:34.517130 1064794 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:41:34.527569 1064794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:41:34.663379 1064794 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:41:34.680375 1064794 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317 for IP: 192.168.76.2
	I1210 07:41:34.680450 1064794 certs.go:195] generating shared ca certs ...
	I1210 07:41:34.680483 1064794 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.680674 1064794 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 07:41:34.680764 1064794 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 07:41:34.680789 1064794 certs.go:257] generating profile certs ...
	I1210 07:41:34.680884 1064794 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.key
	I1210 07:41:34.680928 1064794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.crt with IP's: []
	I1210 07:41:34.839451 1064794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.crt ...
	I1210 07:41:34.839486 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.crt: {Name:mk864b17e4815ee03fc5eadc45f8f3d330d86e28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.839718 1064794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.key ...
	I1210 07:41:34.839736 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.key: {Name:mkac75ec3f8c520b4be98288202003aea88a7881 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.840557 1064794 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f
	I1210 07:41:34.840584 1064794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1210 07:41:34.941668 1064794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f ...
	I1210 07:41:34.941702 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f: {Name:mkd71b8623c8311dc88c663a4045d0b1945deec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.941880 1064794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f ...
	I1210 07:41:34.941896 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f: {Name:mk237d8326178abb6dfc7e4dd919116ec45ea9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.941986 1064794 certs.go:382] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt
	I1210 07:41:34.942080 1064794 certs.go:386] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key
	I1210 07:41:34.942146 1064794 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key
	I1210 07:41:34.942168 1064794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt with IP's: []
	I1210 07:41:35.425873 1064794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt ...
	I1210 07:41:35.425915 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt: {Name:mk51f419728d59ba7ab729d028e45d36640d0231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:35.426770 1064794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key ...
	I1210 07:41:35.426789 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key: {Name:mkb1e5352ebb3c5d51e6e8aed5c36263957e6d22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:35.426995 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 07:41:35.427044 1064794 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 07:41:35.427053 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:41:35.427086 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:41:35.427120 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:41:35.427152 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 07:41:35.427211 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:41:35.427806 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:41:35.447027 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:41:35.465831 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:41:35.484250 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:41:35.503105 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:41:35.521550 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:41:35.540269 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:41:35.563542 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 07:41:35.585738 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 07:41:35.609714 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 07:41:35.628843 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:41:35.647443 1064794 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:41:35.660385 1064794 ssh_runner.go:195] Run: openssl version
	I1210 07:41:35.666967 1064794 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.674504 1064794 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 07:41:35.682195 1064794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.685889 1064794 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.686015 1064794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.727961 1064794 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:41:35.735327 1064794 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/786751.pem /etc/ssl/certs/51391683.0
	I1210 07:41:35.742756 1064794 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.750368 1064794 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 07:41:35.757661 1064794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.761188 1064794 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.761251 1064794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.802868 1064794 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:41:35.810435 1064794 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7867512.pem /etc/ssl/certs/3ec20f2e.0
	I1210 07:41:35.817738 1064794 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.825308 1064794 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:41:35.832635 1064794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.836310 1064794 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.836372 1064794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.877289 1064794 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:41:35.884982 1064794 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
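The 51391683.0, 3ec20f2e.0, and b5213941.0 names above follow OpenSSL's subject-hash convention: `openssl x509 -hash` prints the value that directory-based trust stores expect as the symlink name, which is why each certificate is hashed and then linked. The same link can be reproduced by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	# `openssl rehash` (or the older c_rehash) automates this for a whole directory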
	I1210 07:41:35.892614 1064794 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:41:35.896331 1064794 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:41:35.896383 1064794 kubeadm.go:401] StartCluster: {Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:41:35.896476 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:41:35.896540 1064794 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:41:35.926036 1064794 cri.go:89] found id: ""
	I1210 07:41:35.926112 1064794 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:41:35.934414 1064794 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:41:35.942276 1064794 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:41:35.942375 1064794 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:41:35.950211 1064794 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:41:35.950233 1064794 kubeadm.go:158] found existing configuration files:
	
	I1210 07:41:35.950309 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:41:35.957845 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:41:35.957962 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:41:35.966039 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:41:35.973562 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:41:35.973662 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:41:35.980914 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:41:35.988697 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:41:35.988772 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:41:35.996172 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:41:36.005049 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:41:36.005181 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:41:36.014173 1064794 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:41:36.062406 1064794 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:41:36.062713 1064794 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:41:36.147250 1064794 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:41:36.147383 1064794 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:41:36.147459 1064794 kubeadm.go:319] OS: Linux
	I1210 07:41:36.147537 1064794 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:41:36.147617 1064794 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:41:36.147688 1064794 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:41:36.147759 1064794 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:41:36.147834 1064794 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:41:36.147902 1064794 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:41:36.147977 1064794 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:41:36.148046 1064794 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:41:36.148124 1064794 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:41:36.223682 1064794 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:41:36.223913 1064794 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:41:36.224078 1064794 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:41:36.230559 1064794 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:41:36.237069 1064794 out.go:252]   - Generating certificates and keys ...
	I1210 07:41:36.237182 1064794 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:41:36.237257 1064794 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:41:36.476610 1064794 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:41:36.561778 1064794 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:41:36.854281 1064794 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:41:37.263690 1064794 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:41:37.370103 1064794 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:41:37.370484 1064794 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 07:41:37.933573 1064794 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:41:37.934013 1064794 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 07:41:38.192710 1064794 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:41:38.352643 1064794 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:41:38.587081 1064794 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:41:38.587306 1064794 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:41:38.909718 1064794 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:41:39.048089 1064794 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:41:39.097056 1064794 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:41:39.169471 1064794 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:41:39.365961 1064794 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:41:39.366635 1064794 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:41:39.369209 1064794 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:41:39.372794 1064794 out.go:252]   - Booting up control plane ...
	I1210 07:41:39.372894 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:41:39.372969 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:41:39.374027 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:41:39.390260 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:41:39.390694 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:41:39.397555 1064794 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:41:39.397883 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:41:39.397929 1064794 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:41:39.536450 1064794 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:41:39.536565 1064794 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:45:39.537616 1064794 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001129575s
	I1210 07:45:39.537650 1064794 kubeadm.go:319] 
	I1210 07:45:39.537709 1064794 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:45:39.537747 1064794 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:45:39.537857 1064794 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:45:39.537866 1064794 kubeadm.go:319] 
	I1210 07:45:39.537971 1064794 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:45:39.538008 1064794 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:45:39.538043 1064794 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:45:39.538052 1064794 kubeadm.go:319] 
	I1210 07:45:39.542860 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:39.543622 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:39.543819 1064794 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:45:39.544243 1064794 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:45:39.544260 1064794 kubeadm.go:319] 
	I1210 07:45:39.544379 1064794 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 07:45:39.544512 1064794 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001129575s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
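The failure mode is the kubelet never answering its local healthz probe, and the SystemVerification warning above names the likely culprit on this host: a cgroups v1 kernel (5.15.0-1084-aws) under kubelet v1.35, which now refuses cgroup v1 unless FailCgroupV1 is explicitly set to false. A hedged triage sketch on the node (the KubeletConfiguration fragment is illustrative, per the warning text, not minikube's fix):

	stat -fc %T /sys/fs/cgroup/     # cgroup2fs on a v2 host, tmpfs on v1
	curl -sS http://127.0.0.1:10248/healthz; echo    # the exact probe kubeadm waits on
	sudo journalctl -xeu kubelet | tail -n 50        # kubelet's own exit reason
	# illustrative fragment to opt back into cgroup v1, as the warning instructs:
	cat <<'EOF'
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF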
	
	I1210 07:45:39.544665 1064794 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 07:45:40.003717 1064794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:45:40.026427 1064794 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:45:40.026565 1064794 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:45:40.036588 1064794 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:45:40.036615 1064794 kubeadm.go:158] found existing configuration files:
	
	I1210 07:45:40.036678 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:45:40.045938 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:45:40.046015 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:45:40.054590 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:45:40.063126 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:45:40.063204 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:45:40.071408 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:45:40.079679 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:45:40.079771 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:45:40.088102 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:45:40.097134 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:45:40.097216 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:45:40.105436 1064794 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:45:40.222290 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:40.222807 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:40.298915 1064794 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:49:42.193287 1064794 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 07:49:42.193319 1064794 kubeadm.go:319] 
	I1210 07:49:42.193391 1064794 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 07:49:42.203786 1064794 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:49:42.203866 1064794 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:49:42.203970 1064794 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:49:42.204031 1064794 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:49:42.204076 1064794 kubeadm.go:319] OS: Linux
	I1210 07:49:42.204124 1064794 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:49:42.204177 1064794 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:49:42.204229 1064794 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:49:42.204282 1064794 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:49:42.204335 1064794 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:49:42.204389 1064794 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:49:42.204441 1064794 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:49:42.204493 1064794 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:49:42.204543 1064794 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:49:42.204619 1064794 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:49:42.204719 1064794 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:49:42.204814 1064794 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:49:42.204881 1064794 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:49:42.208050 1064794 out.go:252]   - Generating certificates and keys ...
	I1210 07:49:42.208163 1064794 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:49:42.208281 1064794 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:49:42.208377 1064794 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:49:42.208439 1064794 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:49:42.208528 1064794 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:49:42.208589 1064794 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:49:42.208676 1064794 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:49:42.208750 1064794 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:49:42.208862 1064794 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:49:42.208970 1064794 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:49:42.209024 1064794 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:49:42.209111 1064794 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:49:42.209168 1064794 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:49:42.209240 1064794 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:49:42.209310 1064794 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:49:42.209381 1064794 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:49:42.209443 1064794 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:49:42.209538 1064794 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:49:42.209611 1064794 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:49:42.212530 1064794 out.go:252]   - Booting up control plane ...
	I1210 07:49:42.212677 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:49:42.212801 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:49:42.212895 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:49:42.213029 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:49:42.213133 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:49:42.213240 1064794 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:49:42.213324 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:49:42.213364 1064794 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:49:42.213496 1064794 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:49:42.213644 1064794 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:49:42.213727 1064794 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000061258s
	I1210 07:49:42.213738 1064794 kubeadm.go:319] 
	I1210 07:49:42.213804 1064794 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:49:42.213856 1064794 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:49:42.213977 1064794 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:49:42.213986 1064794 kubeadm.go:319] 
	I1210 07:49:42.214091 1064794 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:49:42.214141 1064794 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:49:42.214197 1064794 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:49:42.214296 1064794 kubeadm.go:403] duration metric: took 8m6.317915618s to StartCluster
	I1210 07:49:42.214312 1064794 kubeadm.go:319] 
	I1210 07:49:42.214353 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:49:42.214424 1064794 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:49:42.249563 1064794 cri.go:89] found id: ""
	I1210 07:49:42.249601 1064794 logs.go:282] 0 containers: []
	W1210 07:49:42.249610 1064794 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:49:42.249616 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:49:42.249684 1064794 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:49:42.277505 1064794 cri.go:89] found id: ""
	I1210 07:49:42.277532 1064794 logs.go:282] 0 containers: []
	W1210 07:49:42.277542 1064794 logs.go:284] No container was found matching "etcd"
	I1210 07:49:42.277549 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:49:42.277621 1064794 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:49:42.315271 1064794 cri.go:89] found id: ""
	I1210 07:49:42.315293 1064794 logs.go:282] 0 containers: []
	W1210 07:49:42.315301 1064794 logs.go:284] No container was found matching "coredns"
	I1210 07:49:42.315308 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:49:42.315372 1064794 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:49:42.345026 1064794 cri.go:89] found id: ""
	I1210 07:49:42.345048 1064794 logs.go:282] 0 containers: []
	W1210 07:49:42.345059 1064794 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:49:42.345066 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:49:42.345129 1064794 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:49:42.373644 1064794 cri.go:89] found id: ""
	I1210 07:49:42.373666 1064794 logs.go:282] 0 containers: []
	W1210 07:49:42.373675 1064794 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:49:42.373683 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:49:42.373745 1064794 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:49:42.400575 1064794 cri.go:89] found id: ""
	I1210 07:49:42.400601 1064794 logs.go:282] 0 containers: []
	W1210 07:49:42.400611 1064794 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:49:42.400617 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:49:42.400696 1064794 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:49:42.427038 1064794 cri.go:89] found id: ""
	I1210 07:49:42.427115 1064794 logs.go:282] 0 containers: []
	W1210 07:49:42.427139 1064794 logs.go:284] No container was found matching "kindnet"
	I1210 07:49:42.427159 1064794 logs.go:123] Gathering logs for containerd ...
	I1210 07:49:42.427171 1064794 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:49:42.467853 1064794 logs.go:123] Gathering logs for container status ...
	I1210 07:49:42.467891 1064794 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:49:42.498107 1064794 logs.go:123] Gathering logs for kubelet ...
	I1210 07:49:42.498136 1064794 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:49:42.557296 1064794 logs.go:123] Gathering logs for dmesg ...
	I1210 07:49:42.557339 1064794 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:49:42.581299 1064794 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:49:42.581326 1064794 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:49:42.659525 1064794 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:49:42.650006    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:42.651006    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:42.651835    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:42.653583    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:42.654334    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:49:42.650006    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:42.651006    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:42.651835    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:42.653583    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:42.654334    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1210 07:49:42.659549 1064794 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000061258s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 07:49:42.659589 1064794 out.go:285] * 
	W1210 07:49:42.659647 1064794 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000061258s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:49:42.659666 1064794 out.go:285] * 
	W1210 07:49:42.661902 1064794 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:49:42.667670 1064794 out.go:203] 
	W1210 07:49:42.671449 1064794 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000061258s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:49:42.671502 1064794 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:49:42.671524 1064794 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:49:42.674621 1064794 out.go:203] 

** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p newest-cni-237317 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
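The root cause in the log above is kubeadm's wait-control-plane phase timing out on the kubelet healthz endpoint. A minimal diagnostic sketch, based only on the probes and commands named in that output, run from the host against the kic container (the container name is from this run; curl being present inside the image is an assumption):

	# Re-run the health probe kubeadm was polling (the [kubelet-check] lines above).
	docker exec newest-cni-237317 curl -sS http://127.0.0.1:10248/healthz; echo
	# The two troubleshooting commands the kubeadm output suggests:
	docker exec newest-cni-237317 systemctl status kubelet --no-pager
	docker exec newest-cni-237317 journalctl -xeu kubelet --no-pager | tail -n 100

If the cgroups v1 deprecation warning in the stderr is the trigger, the kubeadm output itself states that kubelet v1.35 or newer on a cgroup v1 host must be configured with FailCgroupV1 set to false.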
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-237317
helpers_test.go:244: (dbg) docker inspect newest-cni-237317:

-- stdout --
	[
	    {
	        "Id": "a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d",
	        "Created": "2025-12-10T07:41:27.764165056Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1065238,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:41:27.828515523Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d/hostname",
	        "HostsPath": "/var/lib/docker/containers/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d/hosts",
	        "LogPath": "/var/lib/docker/containers/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d-json.log",
	        "Name": "/newest-cni-237317",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-237317:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-237317",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d",
	                "LowerDir": "/var/lib/docker/overlay2/eb95a77a82f1fcb16098f073f23236757bf5560cf9fb37f652c127fb3ef2dbb4-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eb95a77a82f1fcb16098f073f23236757bf5560cf9fb37f652c127fb3ef2dbb4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eb95a77a82f1fcb16098f073f23236757bf5560cf9fb37f652c127fb3ef2dbb4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eb95a77a82f1fcb16098f073f23236757bf5560cf9fb37f652c127fb3ef2dbb4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-237317",
	                "Source": "/var/lib/docker/volumes/newest-cni-237317/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-237317",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-237317",
	                "name.minikube.sigs.k8s.io": "newest-cni-237317",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "082222785b25cb507d74041ac4c00d1d74bffe5ab668e3fe904c3260bea97985",
	            "SandboxKey": "/var/run/docker/netns/082222785b25",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33835"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33836"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33839"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33837"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33838"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-237317": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:54:e3:f6:e2:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8181aebce826300f2c9eb8f48208470a68f1816a212863fa9c220fbbaa29953b",
	                    "EndpointID": "bccbc4d36a210938307e473b6bf375481b7f47c4af07021cfaeeb28874de79dc",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-237317",
	                        "a3bfe8c2955a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
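The inspect dump above is easier to consume through Go templates than as raw JSON; a short sketch extracting the fields this post-mortem actually uses (standard `docker inspect -f` syntax; the container name is from this run):

	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' newest-cni-237317
	docker inspect -f '{{json .NetworkSettings.Ports}}' newest-cni-237317
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' newest-cni-237317

Note that the container reports Status "running" with the 8443 apiserver port mapped to 127.0.0.1:33838, so the failure is inside the node, not at the Docker layer.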
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-237317 -n newest-cni-237317
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-237317 -n newest-cni-237317: exit status 6 (341.994876ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 07:49:43.075979 1075132 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-237317" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
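The warning and the status.go error above point at a stale kubeconfig entry rather than a stopped node: the failed start never wrote this profile's endpoint into the kubeconfig. A sketch of the fix the output itself suggests, plus a quick verification (profile name and kubeconfig path are the ones shown above):

	minikube update-context -p newest-cni-237317
	kubectl config current-context
	grep -n 'newest-cni-237317' /home/jenkins/minikube-integration/22089-784887/kubeconfig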
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-237317 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-611923 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                            │ cert-expiration-611923       │ jenkins │ v1.37.0 │ 10 Dec 25 07:38 UTC │ 10 Dec 25 07:38 UTC │
	│ delete  │ -p cert-expiration-611923                                                                                                                                                                                                                                  │ cert-expiration-611923       │ jenkins │ v1.37.0 │ 10 Dec 25 07:38 UTC │ 10 Dec 25 07:38 UTC │
	│ start   │ -p embed-certs-254586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:38 UTC │ 10 Dec 25 07:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-444518 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ stop    │ -p default-k8s-diff-port-444518 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-444518 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ start   │ -p default-k8s-diff-port-444518 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:40 UTC │
	│ addons  │ enable metrics-server -p embed-certs-254586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ stop    │ -p embed-certs-254586 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-254586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ start   │ -p embed-certs-254586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:41 UTC │
	│ image   │ default-k8s-diff-port-444518 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ pause   │ -p default-k8s-diff-port-444518 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ unpause │ -p default-k8s-diff-port-444518 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p default-k8s-diff-port-444518                                                                                                                                                                                                                            │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p default-k8s-diff-port-444518                                                                                                                                                                                                                            │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p disable-driver-mounts-262664                                                                                                                                                                                                                            │ disable-driver-mounts-262664 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ start   │ -p no-preload-587009 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │                     │
	│ image   │ embed-certs-254586 image list --format=json                                                                                                                                                                                                                │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ pause   │ -p embed-certs-254586 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ unpause │ -p embed-certs-254586 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ delete  │ -p embed-certs-254586                                                                                                                                                                                                                                      │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ delete  │ -p embed-certs-254586                                                                                                                                                                                                                                      │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ start   │ -p newest-cni-237317 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-587009 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:49 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:41:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:41:22.419570 1064794 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:41:22.419770 1064794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:41:22.421060 1064794 out.go:374] Setting ErrFile to fd 2...
	I1210 07:41:22.421091 1064794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:41:22.421504 1064794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 07:41:22.422065 1064794 out.go:368] Setting JSON to false
	I1210 07:41:22.423214 1064794 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23007,"bootTime":1765329476,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 07:41:22.423323 1064794 start.go:143] virtualization:  
	I1210 07:41:22.427718 1064794 out.go:179] * [newest-cni-237317] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:41:22.431355 1064794 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:41:22.431601 1064794 notify.go:221] Checking for updates...
	I1210 07:41:22.438163 1064794 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:41:22.441491 1064794 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:41:22.444751 1064794 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 07:41:22.447902 1064794 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:41:22.451150 1064794 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:41:22.454892 1064794 config.go:182] Loaded profile config "no-preload-587009": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:41:22.454990 1064794 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:41:22.505215 1064794 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:41:22.505348 1064794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:41:22.599796 1064794 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-10 07:41:22.587493789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:41:22.599897 1064794 docker.go:319] overlay module found
	I1210 07:41:22.603335 1064794 out.go:179] * Using the docker driver based on user configuration
	I1210 07:41:22.606318 1064794 start.go:309] selected driver: docker
	I1210 07:41:22.606341 1064794 start.go:927] validating driver "docker" against <nil>
	I1210 07:41:22.606356 1064794 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:41:22.607143 1064794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:41:22.691557 1064794 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-10 07:41:22.681931889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:41:22.691722 1064794 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1210 07:41:22.691759 1064794 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1210 07:41:22.691991 1064794 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 07:41:22.702590 1064794 out.go:179] * Using Docker driver with root privileges
	I1210 07:41:22.705586 1064794 cni.go:84] Creating CNI manager for ""
	I1210 07:41:22.705658 1064794 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:41:22.705667 1064794 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
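	
	The warning above is worth unpacking: with --network-plugin=cni, minikube installs no CNI itself, and for the docker driver + containerd runtime it recommends kindnet. A hedged sketch of the friendlier invocation the warning points at, reusing only the profile name and flags from this run (the explicit --cni=kindnet replaces the recommendation logic; this is not the command the test actually executed):
	
	  # Sketch: same profile, but with the CNI named explicitly.
	  out/minikube-linux-arm64 start -p newest-cni-237317 \
	    --memory=3072 --cni=kindnet --driver=docker \
	    --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
	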
	I1210 07:41:22.705758 1064794 start.go:353] cluster config:
	{Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:41:22.709067 1064794 out.go:179] * Starting "newest-cni-237317" primary control-plane node in "newest-cni-237317" cluster
	I1210 07:41:22.711942 1064794 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:41:22.714970 1064794 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:41:22.717872 1064794 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:41:22.717929 1064794 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 07:41:22.717943 1064794 cache.go:65] Caching tarball of preloaded images
	I1210 07:41:22.718049 1064794 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 07:41:22.718059 1064794 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1210 07:41:22.718202 1064794 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json ...
	I1210 07:41:22.718221 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json: {Name:mk35831d9cdfb4ee294c317ea1250d3c633e2dde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:22.718581 1064794 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:41:22.747190 1064794 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:41:22.747214 1064794 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:41:22.747228 1064794 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:41:22.747259 1064794 start.go:360] acquireMachinesLock for newest-cni-237317: {Name:mk865fae7594bb364b28b787041666ed4ecb9dd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:41:22.747944 1064794 start.go:364] duration metric: took 667.063µs to acquireMachinesLock for "newest-cni-237317"
	I1210 07:41:22.747984 1064794 start.go:93] Provisioning new machine with config: &{Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:41:22.748068 1064794 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:41:21.330818 1061272 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:41:21.932613 1061272 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:41:22.466827 1061272 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:41:22.874611 1061272 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:41:22.875806 1061272 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:41:22.880368 1061272 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:41:22.884509 1061272 out.go:252]   - Booting up control plane ...
	I1210 07:41:22.884617 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:41:22.884696 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:41:22.885635 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:41:22.908007 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:41:22.908118 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:41:22.919248 1061272 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:41:22.924362 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:41:22.924418 1061272 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:41:23.105604 1061272 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:41:23.105729 1061272 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:41:22.751504 1064794 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:41:22.751752 1064794 start.go:159] libmachine.API.Create for "newest-cni-237317" (driver="docker")
	I1210 07:41:22.751794 1064794 client.go:173] LocalClient.Create starting
	I1210 07:41:22.751869 1064794 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem
	I1210 07:41:22.751907 1064794 main.go:143] libmachine: Decoding PEM data...
	I1210 07:41:22.751924 1064794 main.go:143] libmachine: Parsing certificate...
	I1210 07:41:22.751982 1064794 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem
	I1210 07:41:22.751999 1064794 main.go:143] libmachine: Decoding PEM data...
	I1210 07:41:22.752011 1064794 main.go:143] libmachine: Parsing certificate...
	I1210 07:41:22.752421 1064794 cli_runner.go:164] Run: docker network inspect newest-cni-237317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:41:22.771298 1064794 cli_runner.go:211] docker network inspect newest-cni-237317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:41:22.771401 1064794 network_create.go:284] running [docker network inspect newest-cni-237317] to gather additional debugging logs...
	I1210 07:41:22.771420 1064794 cli_runner.go:164] Run: docker network inspect newest-cni-237317
	W1210 07:41:22.796107 1064794 cli_runner.go:211] docker network inspect newest-cni-237317 returned with exit code 1
	I1210 07:41:22.796138 1064794 network_create.go:287] error running [docker network inspect newest-cni-237317]: docker network inspect newest-cni-237317: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-237317 not found
	I1210 07:41:22.796157 1064794 network_create.go:289] output of [docker network inspect newest-cni-237317]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-237317 not found
	
	** /stderr **
	I1210 07:41:22.796260 1064794 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:41:22.817585 1064794 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7092cc4ae12c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:9b:65:77:38:2f} reservation:<nil>}
	I1210 07:41:22.818052 1064794 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-948cd8ab8a49 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6e:79:0a:43:2f:62} reservation:<nil>}
	I1210 07:41:22.818535 1064794 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-21ed51b7c74f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:35:c0:64:58:42} reservation:<nil>}
	I1210 07:41:22.819200 1064794 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400189ad90}
	I1210 07:41:22.819255 1064794 network_create.go:124] attempt to create docker network newest-cni-237317 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 07:41:22.819429 1064794 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-237317 newest-cni-237317
	I1210 07:41:22.888302 1064794 network_create.go:108] docker network newest-cni-237317 192.168.76.0/24 created
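	
	The subnet scan above walks the private /24 candidates (192.168.49.0, .58.0, .67.0) until it finds one unclaimed, then creates the bridge network on 192.168.76.0/24 with gateway 192.168.76.1 and MTU 1500. A quick hedged spot-check of the result, assuming only the docker CLI on the host (the format template mirrors the one this log itself uses):
	
	  # Sketch: confirm the subnet and gateway of the network just created.
	  docker network inspect newest-cni-237317 \
	    --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'
	  # expected output: 192.168.76.0/24 gw 192.168.76.1
	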
	I1210 07:41:22.888339 1064794 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-237317" container
	I1210 07:41:22.888413 1064794 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:41:22.905595 1064794 cli_runner.go:164] Run: docker volume create newest-cni-237317 --label name.minikube.sigs.k8s.io=newest-cni-237317 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:41:22.928697 1064794 oci.go:103] Successfully created a docker volume newest-cni-237317
	I1210 07:41:22.928792 1064794 cli_runner.go:164] Run: docker run --rm --name newest-cni-237317-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-237317 --entrypoint /usr/bin/test -v newest-cni-237317:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
	I1210 07:41:23.496836 1064794 oci.go:107] Successfully prepared a docker volume newest-cni-237317
	I1210 07:41:23.496907 1064794 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:41:23.496920 1064794 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 07:41:23.497004 1064794 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-237317:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 07:41:27.695198 1064794 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-237317:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir: (4.198140987s)
	I1210 07:41:27.695236 1064794 kic.go:203] duration metric: took 4.198307373s to extract preloaded images to volume ...
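	
	The preload step mounts the lz4 tarball read-only into a throwaway kicbase container and untars it into the named volume, so the node starts with its containerd image store pre-populated. A hedged spot-check, with $KICBASE_IMAGE standing in for the long kicbase digest shown above (the variable name is this sketch's, not minikube's):
	
	  # Sketch: peek into the volume the same way the extraction did,
	  # overriding the image entrypoint to run a plain listing.
	  KICBASE_IMAGE='gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca'
	  docker run --rm --entrypoint /bin/ls -v newest-cni-237317:/var \
	    "$KICBASE_IMAGE" /var/lib/containerd
	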
	W1210 07:41:27.695375 1064794 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 07:41:27.695491 1064794 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:41:27.749812 1064794 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-237317 --name newest-cni-237317 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-237317 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-237317 --network newest-cni-237317 --ip 192.168.76.2 --volume newest-cni-237317:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
	I1210 07:41:28.033415 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Running}}
	I1210 07:41:28.055793 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:41:28.079642 1064794 cli_runner.go:164] Run: docker exec newest-cni-237317 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:41:28.133214 1064794 oci.go:144] the created container "newest-cni-237317" has a running status.
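	
	Because the container publishes its ports on 127.0.0.1 with host ports chosen by Docker (--publish=127.0.0.1::22 and friends), the SSH port has to be discovered after the fact; the inspect template below is the same one the provisioner uses a few lines later, where it resolves to 33835:
	
	  # Sketch: recover the host port Docker mapped to the guest's sshd.
	  docker container inspect newest-cni-237317 \
	    --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	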
	I1210 07:41:28.133248 1064794 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa...
	I1210 07:41:28.633820 1064794 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:41:28.653829 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:41:28.671371 1064794 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:41:28.671397 1064794 kic_runner.go:114] Args: [docker exec --privileged newest-cni-237317 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:41:28.713692 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:41:28.729850 1064794 machine.go:94] provisionDockerMachine start ...
	I1210 07:41:28.729960 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:28.748329 1064794 main.go:143] libmachine: Using SSH client type: native
	I1210 07:41:28.748679 1064794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33835 <nil> <nil>}
	I1210 07:41:28.748697 1064794 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:41:28.749343 1064794 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 07:41:31.886152 1064794 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-237317
	
	I1210 07:41:31.886178 1064794 ubuntu.go:182] provisioning hostname "newest-cni-237317"
	I1210 07:41:31.886283 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:31.903879 1064794 main.go:143] libmachine: Using SSH client type: native
	I1210 07:41:31.904204 1064794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33835 <nil> <nil>}
	I1210 07:41:31.904222 1064794 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-237317 && echo "newest-cni-237317" | sudo tee /etc/hostname
	I1210 07:41:32.048555 1064794 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-237317
	
	I1210 07:41:32.048637 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.067055 1064794 main.go:143] libmachine: Using SSH client type: native
	I1210 07:41:32.067377 1064794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33835 <nil> <nil>}
	I1210 07:41:32.067401 1064794 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-237317' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-237317/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-237317' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:41:32.202608 1064794 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:41:32.202637 1064794 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 07:41:32.202667 1064794 ubuntu.go:190] setting up certificates
	I1210 07:41:32.202678 1064794 provision.go:84] configureAuth start
	I1210 07:41:32.202744 1064794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:41:32.219337 1064794 provision.go:143] copyHostCerts
	I1210 07:41:32.219404 1064794 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 07:41:32.219420 1064794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 07:41:32.219497 1064794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 07:41:32.219602 1064794 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 07:41:32.219616 1064794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 07:41:32.219646 1064794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 07:41:32.219709 1064794 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 07:41:32.219718 1064794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 07:41:32.219745 1064794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 07:41:32.219807 1064794 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.newest-cni-237317 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-237317]
	I1210 07:41:32.533791 1064794 provision.go:177] copyRemoteCerts
	I1210 07:41:32.533865 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:41:32.533934 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.551601 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:32.650073 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:41:32.667141 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:41:32.684669 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:41:32.702081 1064794 provision.go:87] duration metric: took 499.382435ms to configureAuth
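	
	configureAuth generated a server certificate whose SANs cover every name the machine may be dialed by (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-237317) and copied it to /etc/docker on the node. A hedged way to confirm the SAN list on the host-side copy, assuming OpenSSL 1.1.1+ for the -ext flag:
	
	  # Sketch: print only the subjectAltName extension of the server cert.
	  openssl x509 -noout -ext subjectAltName \
	    -in /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem
	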
	I1210 07:41:32.702111 1064794 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:41:32.702312 1064794 config.go:182] Loaded profile config "newest-cni-237317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:41:32.702326 1064794 machine.go:97] duration metric: took 3.972452975s to provisionDockerMachine
	I1210 07:41:32.702334 1064794 client.go:176] duration metric: took 9.950533371s to LocalClient.Create
	I1210 07:41:32.702347 1064794 start.go:167] duration metric: took 9.950596741s to libmachine.API.Create "newest-cni-237317"
	I1210 07:41:32.702357 1064794 start.go:293] postStartSetup for "newest-cni-237317" (driver="docker")
	I1210 07:41:32.702367 1064794 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:41:32.702426 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:41:32.702514 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.718852 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:32.814355 1064794 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:41:32.817769 1064794 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:41:32.817798 1064794 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:41:32.817811 1064794 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 07:41:32.817871 1064794 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 07:41:32.817953 1064794 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 07:41:32.818081 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:41:32.825310 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:41:32.842517 1064794 start.go:296] duration metric: took 140.145403ms for postStartSetup
	I1210 07:41:32.842887 1064794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:41:32.859215 1064794 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json ...
	I1210 07:41:32.859502 1064794 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:41:32.859553 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.875883 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:32.967611 1064794 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:41:32.972359 1064794 start.go:128] duration metric: took 10.224272788s to createHost
	I1210 07:41:32.972384 1064794 start.go:83] releasing machines lock for "newest-cni-237317", held for 10.224421419s
	I1210 07:41:32.972457 1064794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:41:32.990273 1064794 ssh_runner.go:195] Run: cat /version.json
	I1210 07:41:32.990351 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.990655 1064794 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:41:32.990729 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:33.013202 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:33.031539 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:33.114754 1064794 ssh_runner.go:195] Run: systemctl --version
	I1210 07:41:33.211079 1064794 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:41:33.215428 1064794 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:41:33.215545 1064794 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:41:33.242581 1064794 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
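	
	Rather than deleting competing CNI configs, minikube parks them under a .mk_disabled suffix so containerd's conf_dir scan no longer picks them up; the two bridge configs named above were moved this way. A hedged check from the host (minikube ssh runs the command inside the node):
	
	  # Sketch: list the CNI config dir and see which files were renamed aside.
	  out/minikube-linux-arm64 ssh -p newest-cni-237317 -- \
	    ls /etc/cni/net.d/
	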
	I1210 07:41:33.242622 1064794 start.go:496] detecting cgroup driver to use...
	I1210 07:41:33.242657 1064794 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:41:33.242740 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:41:33.257818 1064794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:41:33.270562 1064794 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:41:33.270659 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:41:33.288766 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:41:33.307284 1064794 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:41:33.417555 1064794 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:41:33.559224 1064794 docker.go:234] disabling docker service ...
	I1210 07:41:33.559382 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:41:33.583026 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:41:33.596320 1064794 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:41:33.714101 1064794 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:41:33.838575 1064794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:41:33.853369 1064794 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:41:33.868162 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:41:33.876869 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:41:33.885636 1064794 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:41:33.885711 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:41:33.894404 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:41:33.903504 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:41:33.912288 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:41:33.920951 1064794 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:41:33.929214 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:41:33.938205 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:41:33.947047 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:41:33.955864 1064794 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:41:33.963242 1064794 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:41:33.970548 1064794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:41:34.113023 1064794 ssh_runner.go:195] Run: sudo systemctl restart containerd
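	
	The sed pipeline above rewrites /etc/containerd/config.toml in place: the pause image is pinned to registry.k8s.io/pause:3.10.1, SystemdCgroup is forced to false to match the host's cgroupfs driver, legacy runc v1 runtime names are mapped to io.containerd.runc.v2, and conf_dir is pointed at /etc/cni/net.d, after which containerd is restarted. A hedged verification of those edits, run inside the node via minikube ssh:
	
	  # Sketch: confirm the key rewrites landed in config.toml.
	  out/minikube-linux-arm64 ssh -p newest-cni-237317 -- \
	    grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	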
	I1210 07:41:34.252751 1064794 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:41:34.252855 1064794 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:41:34.256875 1064794 start.go:564] Will wait 60s for crictl version
	I1210 07:41:34.256993 1064794 ssh_runner.go:195] Run: which crictl
	I1210 07:41:34.260563 1064794 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:41:34.285437 1064794 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:41:34.285530 1064794 ssh_runner.go:195] Run: containerd --version
	I1210 07:41:34.307510 1064794 ssh_runner.go:195] Run: containerd --version
	I1210 07:41:34.335239 1064794 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 07:41:34.338330 1064794 cli_runner.go:164] Run: docker network inspect newest-cni-237317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:41:34.356185 1064794 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 07:41:34.360231 1064794 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:41:34.373151 1064794 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 07:41:34.376063 1064794 kubeadm.go:884] updating cluster {Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:41:34.376220 1064794 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:41:34.376306 1064794 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:41:34.404402 1064794 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:41:34.404424 1064794 containerd.go:534] Images already preloaded, skipping extraction
	I1210 07:41:34.404484 1064794 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:41:34.432485 1064794 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:41:34.432510 1064794 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:41:34.432518 1064794 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1210 07:41:34.432610 1064794 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-237317 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
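	
	This ExecStart override (note the empty ExecStart= line, which clears the packaged command before substituting minikube's) is what later lands on the node as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; a hedged way to see the merged unit once the node is up:
	
	  # Sketch: show the kubelet unit together with minikube's drop-in.
	  out/minikube-linux-arm64 ssh -p newest-cni-237317 -- \
	    sudo systemctl cat kubelet
	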
	I1210 07:41:34.432688 1064794 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:41:34.457473 1064794 cni.go:84] Creating CNI manager for ""
	I1210 07:41:34.457499 1064794 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:41:34.457517 1064794 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 07:41:34.457543 1064794 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-237317 NodeName:newest-cni-237317 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:41:34.457665 1064794 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-237317"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:41:34.457735 1064794 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:41:34.465701 1064794 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:41:34.465807 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:41:34.473755 1064794 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 07:41:34.486983 1064794 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:41:34.499868 1064794 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
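	
	The kubeadm config rendered above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new by the scp step. A hedged sketch for checking it by hand with the same kubeadm binary the node uses (kubeadm config validate exists in recent kubeadm releases; treat this as a debugging aid, not a step the test performs):
	
	  # Sketch: ask kubeadm itself whether the staged config is well-formed.
	  out/minikube-linux-arm64 ssh -p newest-cni-237317 -- \
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new
	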
	I1210 07:41:34.513272 1064794 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:41:34.517130 1064794 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:41:34.527569 1064794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:41:34.663379 1064794 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:41:34.680375 1064794 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317 for IP: 192.168.76.2
	I1210 07:41:34.680450 1064794 certs.go:195] generating shared ca certs ...
	I1210 07:41:34.680483 1064794 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.680674 1064794 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 07:41:34.680764 1064794 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 07:41:34.680789 1064794 certs.go:257] generating profile certs ...
	I1210 07:41:34.680884 1064794 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.key
	I1210 07:41:34.680928 1064794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.crt with IP's: []
	I1210 07:41:34.839451 1064794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.crt ...
	I1210 07:41:34.839486 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.crt: {Name:mk864b17e4815ee03fc5eadc45f8f3d330d86e28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.839718 1064794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.key ...
	I1210 07:41:34.839736 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.key: {Name:mkac75ec3f8c520b4be98288202003aea88a7881 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.840557 1064794 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f
	I1210 07:41:34.840584 1064794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1210 07:41:34.941668 1064794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f ...
	I1210 07:41:34.941702 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f: {Name:mkd71b8623c8311dc88c663a4045d0b1945deec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.941880 1064794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f ...
	I1210 07:41:34.941896 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f: {Name:mk237d8326178abb6dfc7e4dd919116ec45ea9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.941986 1064794 certs.go:382] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt
	I1210 07:41:34.942080 1064794 certs.go:386] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key
	I1210 07:41:34.942146 1064794 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key
	I1210 07:41:34.942168 1064794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt with IP's: []
	I1210 07:41:35.425873 1064794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt ...
	I1210 07:41:35.425915 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt: {Name:mk51f419728d59ba7ab729d028e45d36640d0231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:35.426770 1064794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key ...
	I1210 07:41:35.426789 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key: {Name:mkb1e5352ebb3c5d51e6e8aed5c36263957e6d22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:35.426995 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 07:41:35.427044 1064794 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 07:41:35.427053 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:41:35.427086 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:41:35.427120 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:41:35.427152 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 07:41:35.427211 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:41:35.427806 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:41:35.447027 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:41:35.465831 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:41:35.484250 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:41:35.503105 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:41:35.521550 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:41:35.540269 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:41:35.563542 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 07:41:35.585738 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 07:41:35.609714 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 07:41:35.628843 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:41:35.647443 1064794 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:41:35.660385 1064794 ssh_runner.go:195] Run: openssl version
	I1210 07:41:35.666967 1064794 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.674504 1064794 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 07:41:35.682195 1064794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.685889 1064794 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.686015 1064794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.727961 1064794 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:41:35.735327 1064794 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/786751.pem /etc/ssl/certs/51391683.0
	I1210 07:41:35.742756 1064794 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.750368 1064794 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 07:41:35.757661 1064794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.761188 1064794 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.761251 1064794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.802868 1064794 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:41:35.810435 1064794 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7867512.pem /etc/ssl/certs/3ec20f2e.0
	I1210 07:41:35.817738 1064794 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.825308 1064794 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:41:35.832635 1064794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.836310 1064794 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.836372 1064794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.877289 1064794 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:41:35.884982 1064794 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 07:41:35.892614 1064794 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:41:35.896331 1064794 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:41:35.896383 1064794 kubeadm.go:401] StartCluster: {Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:41:35.896476 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:41:35.896540 1064794 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:41:35.926036 1064794 cri.go:89] found id: ""
	I1210 07:41:35.926112 1064794 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:41:35.934414 1064794 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:41:35.942276 1064794 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:41:35.942375 1064794 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:41:35.950211 1064794 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:41:35.950233 1064794 kubeadm.go:158] found existing configuration files:
	
	I1210 07:41:35.950309 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:41:35.957845 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:41:35.957962 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:41:35.966039 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:41:35.973562 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:41:35.973662 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:41:35.980914 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:41:35.988697 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:41:35.988772 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:41:35.996172 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:41:36.005049 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:41:36.005181 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:41:36.014173 1064794 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:41:36.062406 1064794 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:41:36.062713 1064794 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:41:36.147250 1064794 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:41:36.147383 1064794 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:41:36.147459 1064794 kubeadm.go:319] OS: Linux
	I1210 07:41:36.147537 1064794 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:41:36.147617 1064794 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:41:36.147688 1064794 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:41:36.147759 1064794 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:41:36.147834 1064794 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:41:36.147902 1064794 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:41:36.147977 1064794 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:41:36.148046 1064794 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:41:36.148124 1064794 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:41:36.223682 1064794 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:41:36.223913 1064794 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:41:36.224078 1064794 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:41:36.230559 1064794 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:41:36.237069 1064794 out.go:252]   - Generating certificates and keys ...
	I1210 07:41:36.237182 1064794 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:41:36.237257 1064794 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:41:36.476610 1064794 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:41:36.561778 1064794 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:41:36.854281 1064794 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:41:37.263690 1064794 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:41:37.370103 1064794 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:41:37.370484 1064794 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 07:41:37.933573 1064794 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:41:37.934013 1064794 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 07:41:38.192710 1064794 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:41:38.352643 1064794 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:41:38.587081 1064794 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:41:38.587306 1064794 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:41:38.909718 1064794 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:41:39.048089 1064794 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:41:39.097056 1064794 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:41:39.169471 1064794 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:41:39.365961 1064794 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:41:39.366635 1064794 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:41:39.369209 1064794 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:41:39.372794 1064794 out.go:252]   - Booting up control plane ...
	I1210 07:41:39.372894 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:41:39.372969 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:41:39.374027 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:41:39.390260 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:41:39.390694 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:41:39.397555 1064794 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:41:39.397883 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:41:39.397929 1064794 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:41:39.536450 1064794 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:41:39.536565 1064794 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:45:23.105552 1061272 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000403225s
	I1210 07:45:23.105596 1061272 kubeadm.go:319] 
	I1210 07:45:23.105659 1061272 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:45:23.105695 1061272 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:45:23.105810 1061272 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:45:23.105817 1061272 kubeadm.go:319] 
	I1210 07:45:23.105931 1061272 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:45:23.105968 1061272 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:45:23.106003 1061272 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:45:23.106008 1061272 kubeadm.go:319] 
	I1210 07:45:23.110089 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:23.110529 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:23.110638 1061272 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:45:23.110873 1061272 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:45:23.110878 1061272 kubeadm.go:319] 
	I1210 07:45:23.110946 1061272 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 07:45:23.111048 1061272 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-587009] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-587009] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000403225s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
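kubeadm gave up after polling http://127.0.0.1:10248/healthz for 4m0s. With the docker driver used in this run, the same endpoint and the kubelet journal can be probed by hand from the host (a sketch; the node container is named after the profile, here no-preload-587009, and curl/journalctl are assumed present in the kicbase image):

	docker exec no-preload-587009 curl -sS http://127.0.0.1:10248/healthz
	docker exec no-preload-587009 journalctl -u kubelet -n 100 --no-pager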
	
	I1210 07:45:23.111129 1061272 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 07:45:23.528980 1061272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:45:23.543064 1061272 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:45:23.543133 1061272 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:45:23.552680 1061272 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:45:23.552702 1061272 kubeadm.go:158] found existing configuration files:
	
	I1210 07:45:23.552757 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:45:23.561132 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:45:23.561196 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:45:23.569220 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:45:23.577552 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:45:23.577617 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:45:23.585736 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:45:23.594195 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:45:23.594261 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:45:23.602367 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:45:23.610802 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:45:23.610868 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:45:23.618934 1061272 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:45:23.738244 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:23.738666 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:23.820302 1061272 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:45:39.537616 1064794 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001129575s
	I1210 07:45:39.537650 1064794 kubeadm.go:319] 
	I1210 07:45:39.537709 1064794 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:45:39.537747 1064794 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:45:39.537857 1064794 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:45:39.537866 1064794 kubeadm.go:319] 
	I1210 07:45:39.537971 1064794 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:45:39.538008 1064794 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:45:39.538043 1064794 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:45:39.538052 1064794 kubeadm.go:319] 
	I1210 07:45:39.542860 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:39.543622 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:39.543819 1064794 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:45:39.544243 1064794 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:45:39.544260 1064794 kubeadm.go:319] 
	I1210 07:45:39.544379 1064794 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 07:45:39.544512 1064794 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001129575s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
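This is the same init failure for the newest-cni-237317 profile (pid 1064794). The cgroups v1 warning repeated in the stderr above names the relevant knob: on a cgroup v1 host, kubelet v1.35+ must have FailCgroupV1 explicitly set to false, which is a plausible root cause for the kubelet never reporting healthy on this 5.15 cgroup v1 node. As a KubeletConfiguration fragment that would be (a sketch; the warning names the option, and the v1beta1 YAML casing failCgroupV1 is an assumption):

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false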
	
	I1210 07:45:39.544665 1064794 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 07:45:40.003717 1064794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:45:40.026427 1064794 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:45:40.026565 1064794 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:45:40.036588 1064794 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:45:40.036615 1064794 kubeadm.go:158] found existing configuration files:
	
	I1210 07:45:40.036678 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:45:40.045938 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:45:40.046015 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:45:40.054590 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:45:40.063126 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:45:40.063204 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:45:40.071408 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:45:40.079679 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:45:40.079771 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:45:40.088102 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:45:40.097134 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:45:40.097216 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:45:40.105436 1064794 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:45:40.222290 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:40.222807 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:40.298915 1064794 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:49:26.015309 1061272 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 07:49:26.015352 1061272 kubeadm.go:319] 
	I1210 07:49:26.015478 1061272 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
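Note the change from the earlier attempts: this retry fails with 'connection refused' on 127.0.0.1:10248 rather than a deadline, i.e. nothing is listening on the kubelet health port at all. A quick listener check from the host (a sketch; docker driver, profile name taken from this log, ss assumed present in the kicbase image):

	docker exec no-preload-587009 ss -ltn 'sport = :10248'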
	I1210 07:49:26.021506 1061272 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:49:26.021573 1061272 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:49:26.021669 1061272 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:49:26.021735 1061272 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:49:26.021780 1061272 kubeadm.go:319] OS: Linux
	I1210 07:49:26.021833 1061272 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:49:26.021898 1061272 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:49:26.021954 1061272 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:49:26.022012 1061272 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:49:26.022072 1061272 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:49:26.022130 1061272 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:49:26.022183 1061272 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:49:26.022239 1061272 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:49:26.022294 1061272 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:49:26.022377 1061272 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:49:26.022520 1061272 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:49:26.022665 1061272 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:49:26.022797 1061272 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:49:26.025625 1061272 out.go:252]   - Generating certificates and keys ...
	I1210 07:49:26.025738 1061272 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:49:26.025820 1061272 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:49:26.025909 1061272 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:49:26.025981 1061272 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:49:26.026084 1061272 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:49:26.026145 1061272 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:49:26.026218 1061272 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:49:26.026288 1061272 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:49:26.026372 1061272 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:49:26.026456 1061272 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:49:26.026527 1061272 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:49:26.026596 1061272 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:49:26.026658 1061272 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:49:26.026731 1061272 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:49:26.026814 1061272 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:49:26.026910 1061272 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:49:26.027000 1061272 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:49:26.027123 1061272 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:49:26.027217 1061272 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:49:26.032204 1061272 out.go:252]   - Booting up control plane ...
	I1210 07:49:26.032327 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:49:26.032449 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:49:26.032535 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:49:26.032660 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:49:26.032760 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:49:26.032871 1061272 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:49:26.032963 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:49:26.033008 1061272 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:49:26.033144 1061272 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:49:26.033252 1061272 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:49:26.033319 1061272 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00018658s
	I1210 07:49:26.033356 1061272 kubeadm.go:319] 
	I1210 07:49:26.033430 1061272 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:49:26.033471 1061272 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:49:26.033578 1061272 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:49:26.033591 1061272 kubeadm.go:319] 
	I1210 07:49:26.033695 1061272 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:49:26.033732 1061272 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:49:26.033765 1061272 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:49:26.033838 1061272 kubeadm.go:403] duration metric: took 8m9.047256448s to StartCluster
	I1210 07:49:26.033878 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:49:26.033967 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:49:26.034180 1061272 kubeadm.go:319] 
	I1210 07:49:26.078012 1061272 cri.go:89] found id: ""
	I1210 07:49:26.078053 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.078063 1061272 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:49:26.078088 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:49:26.078174 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:49:26.106609 1061272 cri.go:89] found id: ""
	I1210 07:49:26.106637 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.106653 1061272 logs.go:284] No container was found matching "etcd"
	I1210 07:49:26.106660 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:49:26.106763 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:49:26.132553 1061272 cri.go:89] found id: ""
	I1210 07:49:26.132579 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.132589 1061272 logs.go:284] No container was found matching "coredns"
	I1210 07:49:26.132595 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:49:26.132657 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:49:26.159729 1061272 cri.go:89] found id: ""
	I1210 07:49:26.159779 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.159789 1061272 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:49:26.159797 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:49:26.159864 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:49:26.185308 1061272 cri.go:89] found id: ""
	I1210 07:49:26.185386 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.185409 1061272 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:49:26.185430 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:49:26.185524 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:49:26.210297 1061272 cri.go:89] found id: ""
	I1210 07:49:26.210364 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.210388 1061272 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:49:26.210409 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:49:26.210538 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:49:26.235247 1061272 cri.go:89] found id: ""
	I1210 07:49:26.235320 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.235341 1061272 logs.go:284] No container was found matching "kindnet"
	I1210 07:49:26.235352 1061272 logs.go:123] Gathering logs for kubelet ...
	I1210 07:49:26.235364 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:49:26.292545 1061272 logs.go:123] Gathering logs for dmesg ...
	I1210 07:49:26.292580 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:49:26.309666 1061272 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:49:26.309695 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:49:26.371886 1061272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:49:26.363797    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.364463    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.366186    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.366720    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.368404    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:49:26.371909 1061272 logs.go:123] Gathering logs for containerd ...
	I1210 07:49:26.371922 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:49:26.414122 1061272 logs.go:123] Gathering logs for container status ...
	I1210 07:49:26.414158 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:49:26.443108 1061272 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00018658s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 07:49:26.443165 1061272 out.go:285] * 
	W1210 07:49:26.443224 1061272 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1210 07:49:26.443242 1061272 out.go:285] * 
	W1210 07:49:26.445452 1061272 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:49:26.452172 1061272 out.go:203] 
	W1210 07:49:26.455094 1061272 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1210 07:49:26.455136 1061272 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:49:26.455159 1061272 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:49:26.458257 1061272 out.go:203] 
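	The suggestion printed above can be exercised by recreating the profile with the kubelet cgroup driver pinned to systemd. A minimal sketch, assuming the same docker/containerd setup as this run; <profile> stands in for the failing profile name, which this log excerpt does not identify:
	
		# recreate the profile with the extra-config suggested in the log
		minikube delete -p <profile>
		minikube start -p <profile> --driver=docker --container-runtime=containerd \
		  --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd
	
	Note that the kubelet unit log at the end of this report points at cgroup v1 validation rather than a driver mismatch, so this flag alone may not clear the crash loop.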
	I1210 07:49:42.193287 1064794 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 07:49:42.193319 1064794 kubeadm.go:319] 
	I1210 07:49:42.193391 1064794 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 07:49:42.203786 1064794 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:49:42.203866 1064794 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:49:42.203970 1064794 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:49:42.204031 1064794 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:49:42.204076 1064794 kubeadm.go:319] OS: Linux
	I1210 07:49:42.204124 1064794 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:49:42.204177 1064794 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:49:42.204229 1064794 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:49:42.204282 1064794 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:49:42.204335 1064794 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:49:42.204389 1064794 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:49:42.204441 1064794 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:49:42.204493 1064794 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:49:42.204543 1064794 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:49:42.204619 1064794 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:49:42.204719 1064794 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:49:42.204814 1064794 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:49:42.204881 1064794 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:49:42.208050 1064794 out.go:252]   - Generating certificates and keys ...
	I1210 07:49:42.208163 1064794 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:49:42.208281 1064794 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:49:42.208377 1064794 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:49:42.208439 1064794 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:49:42.208528 1064794 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:49:42.208589 1064794 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:49:42.208676 1064794 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:49:42.208750 1064794 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:49:42.208862 1064794 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:49:42.208970 1064794 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:49:42.209024 1064794 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:49:42.209111 1064794 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:49:42.209168 1064794 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:49:42.209240 1064794 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:49:42.209310 1064794 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:49:42.209381 1064794 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:49:42.209443 1064794 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:49:42.209538 1064794 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:49:42.209611 1064794 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:49:42.212530 1064794 out.go:252]   - Booting up control plane ...
	I1210 07:49:42.212677 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:49:42.212801 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:49:42.212895 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:49:42.213029 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:49:42.213133 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:49:42.213240 1064794 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:49:42.213324 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:49:42.213364 1064794 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:49:42.213496 1064794 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:49:42.213644 1064794 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:49:42.213727 1064794 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000061258s
	I1210 07:49:42.213738 1064794 kubeadm.go:319] 
	I1210 07:49:42.213804 1064794 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:49:42.213856 1064794 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:49:42.213977 1064794 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:49:42.213986 1064794 kubeadm.go:319] 
	I1210 07:49:42.214091 1064794 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:49:42.214141 1064794 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:49:42.214197 1064794 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:49:42.214296 1064794 kubeadm.go:403] duration metric: took 8m6.317915618s to StartCluster
	I1210 07:49:42.214312 1064794 kubeadm.go:319] 
	I1210 07:49:42.214353 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:49:42.214424 1064794 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:49:42.249563 1064794 cri.go:89] found id: ""
	I1210 07:49:42.249601 1064794 logs.go:282] 0 containers: []
	W1210 07:49:42.249610 1064794 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:49:42.249616 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:49:42.249684 1064794 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:49:42.277505 1064794 cri.go:89] found id: ""
	I1210 07:49:42.277532 1064794 logs.go:282] 0 containers: []
	W1210 07:49:42.277542 1064794 logs.go:284] No container was found matching "etcd"
	I1210 07:49:42.277549 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:49:42.277621 1064794 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:49:42.315271 1064794 cri.go:89] found id: ""
	I1210 07:49:42.315293 1064794 logs.go:282] 0 containers: []
	W1210 07:49:42.315301 1064794 logs.go:284] No container was found matching "coredns"
	I1210 07:49:42.315308 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:49:42.315372 1064794 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:49:42.345026 1064794 cri.go:89] found id: ""
	I1210 07:49:42.345048 1064794 logs.go:282] 0 containers: []
	W1210 07:49:42.345059 1064794 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:49:42.345066 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:49:42.345129 1064794 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:49:42.373644 1064794 cri.go:89] found id: ""
	I1210 07:49:42.373666 1064794 logs.go:282] 0 containers: []
	W1210 07:49:42.373675 1064794 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:49:42.373683 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:49:42.373745 1064794 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:49:42.400575 1064794 cri.go:89] found id: ""
	I1210 07:49:42.400601 1064794 logs.go:282] 0 containers: []
	W1210 07:49:42.400611 1064794 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:49:42.400617 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:49:42.400696 1064794 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:49:42.427038 1064794 cri.go:89] found id: ""
	I1210 07:49:42.427115 1064794 logs.go:282] 0 containers: []
	W1210 07:49:42.427139 1064794 logs.go:284] No container was found matching "kindnet"
	I1210 07:49:42.427159 1064794 logs.go:123] Gathering logs for containerd ...
	I1210 07:49:42.427171 1064794 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:49:42.467853 1064794 logs.go:123] Gathering logs for container status ...
	I1210 07:49:42.467891 1064794 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:49:42.498107 1064794 logs.go:123] Gathering logs for kubelet ...
	I1210 07:49:42.498136 1064794 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:49:42.557296 1064794 logs.go:123] Gathering logs for dmesg ...
	I1210 07:49:42.557339 1064794 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:49:42.581299 1064794 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:49:42.581326 1064794 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:49:42.659525 1064794 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:49:42.650006    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:42.651006    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:42.651835    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:42.653583    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:42.654334    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 07:49:42.659549 1064794 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000061258s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 07:49:42.659589 1064794 out.go:285] * 
	W1210 07:49:42.659647 1064794 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1210 07:49:42.659666 1064794 out.go:285] * 
	W1210 07:49:42.661902 1064794 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:49:42.667670 1064794 out.go:203] 
	W1210 07:49:42.671449 1064794 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1210 07:49:42.671502 1064794 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:49:42.671524 1064794 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:49:42.674621 1064794 out.go:203] 
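	Both failures above end in the same kubelet health-check timeout, and the kubelet section below shows the kubelet exiting on cgroup v1 validation. A quick way to confirm which cgroup hierarchy the node is on (this prints cgroup2fs on a cgroup v2 host and tmpfs on cgroup v1; <profile> is a placeholder):
	
		minikube ssh -p <profile> -- stat -fc %T /sys/fs/cgroup/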
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.189451530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.189523465Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.189620993Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.189699829Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.189762329Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.189821054Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.189877891Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.189950344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.190019227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.190109386Z" level=info msg="Connect containerd service"
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.190549139Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.191210130Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.204351589Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.204433822Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.205109853Z" level=info msg="Start subscribing containerd event"
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.205184234Z" level=info msg="Start recovering state"
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.248948445Z" level=info msg="Start event monitor"
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.249142024Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.249206008Z" level=info msg="Start streaming server"
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.249282998Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.249338737Z" level=info msg="runtime interface starting up..."
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.249390947Z" level=info msg="starting plugins..."
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.249453495Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 07:41:34 newest-cni-237317 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.250564996Z" level=info msg="containerd successfully booted in 0.087383s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:49:43.685565    4890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:43.686027    4890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:43.687898    4890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:43.688457    4890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:43.690162    4890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:49:43 up  6:31,  0 user,  load average: 0.54, 1.03, 1.70
	Linux newest-cni-237317 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:49:40 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:49:40 newest-cni-237317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 10 07:49:40 newest-cni-237317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:40 newest-cni-237317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:40 newest-cni-237317 kubelet[4692]: E1210 07:49:40.842695    4692 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:49:40 newest-cni-237317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:49:40 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:49:41 newest-cni-237317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 10 07:49:41 newest-cni-237317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:41 newest-cni-237317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:41 newest-cni-237317 kubelet[4698]: E1210 07:49:41.599049    4698 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:49:41 newest-cni-237317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:49:41 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:49:42 newest-cni-237317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 10 07:49:42 newest-cni-237317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:42 newest-cni-237317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:42 newest-cni-237317 kubelet[4724]: E1210 07:49:42.361162    4724 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:49:42 newest-cni-237317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:49:42 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:49:42 newest-cni-237317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 10 07:49:42 newest-cni-237317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:42 newest-cni-237317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:43 newest-cni-237317 kubelet[4804]: E1210 07:49:43.040078    4804 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:49:43 newest-cni-237317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:49:43 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
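The kubelet crash loop above (restart counters 318-321 within a few seconds) is the root cause of this failure: kubelet validation rejects the node's cgroup v1 setup before any static pod can start, so the apiserver never comes up and the earlier "describe nodes" calls get connection refused. A minimal diagnostic sketch, assuming a shell on the CI host and the node container named in the logs (these commands are not part of the recorded test run):

	# "cgroup2fs" means the node sees cgroup v2; "tmpfs" means cgroup v1,
	# which is what the kubelet validation above refuses to run on.
	docker exec newest-cni-237317 stat -fc %T /sys/fs/cgroup
	# The error text corresponds to the KubeletConfiguration field failCgroupV1;
	# this greps the config files kubeadm writes under /var/lib/kubelet/:
	docker exec newest-cni-237317 grep -r failCgroupV1 /var/lib/kubelet/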
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-237317 -n newest-cni-237317
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-237317 -n newest-cni-237317: exit status 6 (343.053114ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 07:49:44.217902 1075353 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-237317" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-237317" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (501.90s)
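Note that `--format={{.APIServer}}` is a Go template evaluated against minikube's status struct; the template still renders "Stopped" even though the command exits 6 because of the kubeconfig-endpoint error in stderr. A sketch of related queries, assuming the same binary and profile (flags are standard minikube CLI):

	# Read several fields of the status struct in one template:
	out/minikube-linux-arm64 status -p newest-cni-237317 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
	# JSON output sidesteps template quoting and shows every field at once:
	out/minikube-linux-arm64 status -p newest-cni-237317 --output=json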

x
+
TestStartStop/group/no-preload/serial/DeployApp (3s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-587009 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-587009 create -f testdata/busybox.yaml: exit status 1 (59.034219ms)

** stderr ** 
	error: context "no-preload-587009" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-587009 create -f testdata/busybox.yaml failed: exit status 1
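The create itself never reached a cluster: FirstStart for this profile did not complete, so no "no-preload-587009" context was ever written to the kubeconfig. A reproduction sketch, assuming a shell on the CI host with the same KUBECONFIG (standard kubectl/minikube commands):

	# List the contexts kubectl actually knows about; the profile is absent:
	kubectl config get-contexts
	# update-context rewrites the profile's kubeconfig entry, but only helps
	# once the cluster itself is up:
	out/minikube-linux-arm64 -p no-preload-587009 update-context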
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-587009
helpers_test.go:244: (dbg) docker inspect no-preload-587009:

-- stdout --
	[
	    {
	        "Id": "59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf",
	        "Created": "2025-12-10T07:40:57.013300176Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1061581,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:40:57.085196071Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/hostname",
	        "HostsPath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/hosts",
	        "LogPath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf-json.log",
	        "Name": "/no-preload-587009",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-587009:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-587009",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf",
	                "LowerDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-587009",
	                "Source": "/var/lib/docker/volumes/no-preload-587009/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-587009",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-587009",
	                "name.minikube.sigs.k8s.io": "no-preload-587009",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c8b83fbfc75ea1d8c820bf3d9633eb7375349335312aed9e093d5e02998fdbe5",
	            "SandboxKey": "/var/run/docker/netns/c8b83fbfc75e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33830"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33831"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33834"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33832"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33833"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-587009": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:b3:8b:9d:de:92",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "93aee88a5c37e6ba01b74d7794f328193d01cfce9cb66379ab00b3f3e2e73a48",
	                    "EndpointID": "5db24e5622527f7835e680ba82c923c3693544dd67ec75d3b13b6f9a54598147",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-587009",
	                        "59d9b9413fb3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
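The inspect output above shows the container layer is healthy: running, privileged, and with 8443 published to a localhost port; the failure sits one layer up, in the Kubernetes bootstrap. Single fields can be pulled without scanning the full JSON, using the same Go-template syntax the harness itself uses later in this report; a sketch against the container named above:

	# Container state and init PID:
	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' no-preload-587009
	# Host port mapped to the apiserver's 8443/tcp:
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-587009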
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-587009 -n no-preload-587009
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-587009 -n no-preload-587009: exit status 6 (330.191903ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 07:49:28.395134 1074128 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-587009" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
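Exit status 6 with Host reporting "Running" means the container is up but minikube could not resolve the profile's endpoint from the kubeconfig, matching the status.go:458 error above. A sketch to confirm that from the same file the test points at (path taken from the log):

	# Lists the cluster entries in the test's kubeconfig; "no-preload-587009"
	# should be missing:
	KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig \
	  kubectl config view -o jsonpath='{.clusters[*].name}'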
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-587009 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-444518 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:38 UTC │ 10 Dec 25 07:39 UTC │
	│ start   │ -p cert-expiration-611923 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                            │ cert-expiration-611923       │ jenkins │ v1.37.0 │ 10 Dec 25 07:38 UTC │ 10 Dec 25 07:38 UTC │
	│ delete  │ -p cert-expiration-611923                                                                                                                                                                                                                                  │ cert-expiration-611923       │ jenkins │ v1.37.0 │ 10 Dec 25 07:38 UTC │ 10 Dec 25 07:38 UTC │
	│ start   │ -p embed-certs-254586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:38 UTC │ 10 Dec 25 07:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-444518 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ stop    │ -p default-k8s-diff-port-444518 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-444518 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ start   │ -p default-k8s-diff-port-444518 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:40 UTC │
	│ addons  │ enable metrics-server -p embed-certs-254586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ stop    │ -p embed-certs-254586 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-254586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ start   │ -p embed-certs-254586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:41 UTC │
	│ image   │ default-k8s-diff-port-444518 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ pause   │ -p default-k8s-diff-port-444518 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ unpause │ -p default-k8s-diff-port-444518 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p default-k8s-diff-port-444518                                                                                                                                                                                                                            │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p default-k8s-diff-port-444518                                                                                                                                                                                                                            │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p disable-driver-mounts-262664                                                                                                                                                                                                                            │ disable-driver-mounts-262664 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ start   │ -p no-preload-587009 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │                     │
	│ image   │ embed-certs-254586 image list --format=json                                                                                                                                                                                                                │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ pause   │ -p embed-certs-254586 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ unpause │ -p embed-certs-254586 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ delete  │ -p embed-certs-254586                                                                                                                                                                                                                                      │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ delete  │ -p embed-certs-254586                                                                                                                                                                                                                                      │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ start   │ -p newest-cni-237317 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:41:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:41:22.419570 1064794 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:41:22.419770 1064794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:41:22.421060 1064794 out.go:374] Setting ErrFile to fd 2...
	I1210 07:41:22.421091 1064794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:41:22.421504 1064794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 07:41:22.422065 1064794 out.go:368] Setting JSON to false
	I1210 07:41:22.423214 1064794 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23007,"bootTime":1765329476,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 07:41:22.423323 1064794 start.go:143] virtualization:  
	I1210 07:41:22.427718 1064794 out.go:179] * [newest-cni-237317] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:41:22.431355 1064794 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:41:22.431601 1064794 notify.go:221] Checking for updates...
	I1210 07:41:22.438163 1064794 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:41:22.441491 1064794 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:41:22.444751 1064794 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 07:41:22.447902 1064794 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:41:22.451150 1064794 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:41:22.454892 1064794 config.go:182] Loaded profile config "no-preload-587009": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:41:22.454990 1064794 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:41:22.505215 1064794 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:41:22.505348 1064794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:41:22.599796 1064794 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-10 07:41:22.587493789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:41:22.599897 1064794 docker.go:319] overlay module found
	I1210 07:41:22.603335 1064794 out.go:179] * Using the docker driver based on user configuration
	I1210 07:41:22.606318 1064794 start.go:309] selected driver: docker
	I1210 07:41:22.606341 1064794 start.go:927] validating driver "docker" against <nil>
	I1210 07:41:22.606356 1064794 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:41:22.607143 1064794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:41:22.691557 1064794 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-10 07:41:22.681931889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:41:22.691722 1064794 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1210 07:41:22.691759 1064794 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1210 07:41:22.691991 1064794 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 07:41:22.702590 1064794 out.go:179] * Using Docker driver with root privileges
	I1210 07:41:22.705586 1064794 cni.go:84] Creating CNI manager for ""
	I1210 07:41:22.705658 1064794 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:41:22.705667 1064794 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 07:41:22.705758 1064794 start.go:353] cluster config:
	{Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:41:22.709067 1064794 out.go:179] * Starting "newest-cni-237317" primary control-plane node in "newest-cni-237317" cluster
	I1210 07:41:22.711942 1064794 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:41:22.714970 1064794 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:41:22.717872 1064794 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:41:22.717929 1064794 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 07:41:22.717943 1064794 cache.go:65] Caching tarball of preloaded images
	I1210 07:41:22.718049 1064794 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 07:41:22.718059 1064794 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1210 07:41:22.718202 1064794 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json ...
	I1210 07:41:22.718221 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json: {Name:mk35831d9cdfb4ee294c317ea1250d3c633e2dde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:22.718581 1064794 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:41:22.747190 1064794 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:41:22.747214 1064794 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:41:22.747228 1064794 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:41:22.747259 1064794 start.go:360] acquireMachinesLock for newest-cni-237317: {Name:mk865fae7594bb364b28b787041666ed4ecb9dd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:41:22.747944 1064794 start.go:364] duration metric: took 667.063µs to acquireMachinesLock for "newest-cni-237317"
	I1210 07:41:22.747984 1064794 start.go:93] Provisioning new machine with config: &{Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:41:22.748068 1064794 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:41:21.330818 1061272 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:41:21.932613 1061272 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:41:22.466827 1061272 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:41:22.874611 1061272 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:41:22.875806 1061272 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:41:22.880368 1061272 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:41:22.884509 1061272 out.go:252]   - Booting up control plane ...
	I1210 07:41:22.884617 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:41:22.884696 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:41:22.885635 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:41:22.908007 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:41:22.908118 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:41:22.919248 1061272 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:41:22.924362 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:41:22.924418 1061272 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:41:23.105604 1061272 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:41:23.105729 1061272 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:41:22.751504 1064794 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:41:22.751752 1064794 start.go:159] libmachine.API.Create for "newest-cni-237317" (driver="docker")
	I1210 07:41:22.751794 1064794 client.go:173] LocalClient.Create starting
	I1210 07:41:22.751869 1064794 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem
	I1210 07:41:22.751907 1064794 main.go:143] libmachine: Decoding PEM data...
	I1210 07:41:22.751924 1064794 main.go:143] libmachine: Parsing certificate...
	I1210 07:41:22.751982 1064794 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem
	I1210 07:41:22.751999 1064794 main.go:143] libmachine: Decoding PEM data...
	I1210 07:41:22.752011 1064794 main.go:143] libmachine: Parsing certificate...
	I1210 07:41:22.752421 1064794 cli_runner.go:164] Run: docker network inspect newest-cni-237317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:41:22.771298 1064794 cli_runner.go:211] docker network inspect newest-cni-237317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:41:22.771401 1064794 network_create.go:284] running [docker network inspect newest-cni-237317] to gather additional debugging logs...
	I1210 07:41:22.771420 1064794 cli_runner.go:164] Run: docker network inspect newest-cni-237317
	W1210 07:41:22.796107 1064794 cli_runner.go:211] docker network inspect newest-cni-237317 returned with exit code 1
	I1210 07:41:22.796138 1064794 network_create.go:287] error running [docker network inspect newest-cni-237317]: docker network inspect newest-cni-237317: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-237317 not found
	I1210 07:41:22.796157 1064794 network_create.go:289] output of [docker network inspect newest-cni-237317]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-237317 not found
	
	** /stderr **
	I1210 07:41:22.796260 1064794 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:41:22.817585 1064794 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7092cc4ae12c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:9b:65:77:38:2f} reservation:<nil>}
	I1210 07:41:22.818052 1064794 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-948cd8ab8a49 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6e:79:0a:43:2f:62} reservation:<nil>}
	I1210 07:41:22.818535 1064794 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-21ed51b7c74f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:35:c0:64:58:42} reservation:<nil>}
	I1210 07:41:22.819200 1064794 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400189ad90}
	I1210 07:41:22.819255 1064794 network_create.go:124] attempt to create docker network newest-cni-237317 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 07:41:22.819429 1064794 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-237317 newest-cni-237317
	I1210 07:41:22.888302 1064794 network_create.go:108] docker network newest-cni-237317 192.168.76.0/24 created
	I1210 07:41:22.888339 1064794 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-237317" container
	I1210 07:41:22.888413 1064794 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:41:22.905595 1064794 cli_runner.go:164] Run: docker volume create newest-cni-237317 --label name.minikube.sigs.k8s.io=newest-cni-237317 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:41:22.928697 1064794 oci.go:103] Successfully created a docker volume newest-cni-237317
	I1210 07:41:22.928792 1064794 cli_runner.go:164] Run: docker run --rm --name newest-cni-237317-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-237317 --entrypoint /usr/bin/test -v newest-cni-237317:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
	I1210 07:41:23.496836 1064794 oci.go:107] Successfully prepared a docker volume newest-cni-237317
	I1210 07:41:23.496907 1064794 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:41:23.496920 1064794 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 07:41:23.497004 1064794 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-237317:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 07:41:27.695198 1064794 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-237317:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir: (4.198140987s)
	I1210 07:41:27.695236 1064794 kic.go:203] duration metric: took 4.198307373s to extract preloaded images to volume ...
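Note: the preload is populated by a throwaway container that mounts the tarball read-only and the named volume at /extractDir, then runs tar with lz4 decompression. A rough spot-check that the extraction landed where the node image expects it (the containerd path below is an assumption based on the -v newest-cni-237317:/var mount above, and alpine is just a convenient throwaway image):

    # List containerd's snapshotter store inside the freshly populated volume.
    docker run --rm -v newest-cni-237317:/var alpine \
      ls /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs
    # A populated "snapshots" directory suggests the preloaded images extracted successfully.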
	W1210 07:41:27.695375 1064794 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 07:41:27.695491 1064794 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:41:27.749812 1064794 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-237317 --name newest-cni-237317 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-237317 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-237317 --network newest-cni-237317 --ip 192.168.76.2 --volume newest-cni-237317:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
	I1210 07:41:28.033415 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Running}}
	I1210 07:41:28.055793 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:41:28.079642 1064794 cli_runner.go:164] Run: docker exec newest-cni-237317 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:41:28.133214 1064794 oci.go:144] the created container "newest-cni-237317" has a running status.
	I1210 07:41:28.133248 1064794 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa...
	I1210 07:41:28.633820 1064794 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:41:28.653829 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:41:28.671371 1064794 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:41:28.671397 1064794 kic_runner.go:114] Args: [docker exec --privileged newest-cni-237317 chown docker:docker /home/docker/.ssh/authorized_keys]
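Note: the three steps above generate the machine SSH key on the host, push the public half into the container as /home/docker/.ssh/authorized_keys, and fix its ownership. A hand-rolled equivalent (a sketch only; it assumes /home/docker/.ssh already exists in the kicbase image, and the key path is hypothetical):

    # Mirror the kic_runner provisioning steps by hand.
    ssh-keygen -t rsa -N '' -f ./id_rsa
    docker cp ./id_rsa.pub newest-cni-237317:/home/docker/.ssh/authorized_keys
    docker exec --privileged newest-cni-237317 chown docker:docker /home/docker/.ssh/authorized_keys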
	I1210 07:41:28.713692 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:41:28.729850 1064794 machine.go:94] provisionDockerMachine start ...
	I1210 07:41:28.729960 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:28.748329 1064794 main.go:143] libmachine: Using SSH client type: native
	I1210 07:41:28.748679 1064794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33835 <nil> <nil>}
	I1210 07:41:28.748697 1064794 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:41:28.749343 1064794 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 07:41:31.886152 1064794 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-237317
	
	I1210 07:41:31.886178 1064794 ubuntu.go:182] provisioning hostname "newest-cni-237317"
	I1210 07:41:31.886283 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:31.903879 1064794 main.go:143] libmachine: Using SSH client type: native
	I1210 07:41:31.904204 1064794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33835 <nil> <nil>}
	I1210 07:41:31.904222 1064794 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-237317 && echo "newest-cni-237317" | sudo tee /etc/hostname
	I1210 07:41:32.048555 1064794 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-237317
	
	I1210 07:41:32.048637 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.067055 1064794 main.go:143] libmachine: Using SSH client type: native
	I1210 07:41:32.067377 1064794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33835 <nil> <nil>}
	I1210 07:41:32.067401 1064794 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-237317' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-237317/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-237317' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:41:32.202608 1064794 main.go:143] libmachine: SSH cmd err, output: <nil>: 
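Note: the SSH script above is idempotent — it only touches /etc/hosts when no line already ends in the hostname, rewriting an existing 127.0.1.1 entry in place or appending one otherwise. A quick way to confirm the result from inside the node (a sketch, not output from this run):

    # Resolve the new hostname through the node's local hosts file.
    docker exec newest-cni-237317 getent hosts newest-cni-237317
    # Expected: a 127.0.1.1 line mapping to newest-cni-237317.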
	I1210 07:41:32.202637 1064794 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 07:41:32.202667 1064794 ubuntu.go:190] setting up certificates
	I1210 07:41:32.202678 1064794 provision.go:84] configureAuth start
	I1210 07:41:32.202744 1064794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:41:32.219337 1064794 provision.go:143] copyHostCerts
	I1210 07:41:32.219404 1064794 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 07:41:32.219420 1064794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 07:41:32.219497 1064794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 07:41:32.219602 1064794 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 07:41:32.219616 1064794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 07:41:32.219646 1064794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 07:41:32.219709 1064794 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 07:41:32.219718 1064794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 07:41:32.219745 1064794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 07:41:32.219807 1064794 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.newest-cni-237317 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-237317]
	I1210 07:41:32.533791 1064794 provision.go:177] copyRemoteCerts
	I1210 07:41:32.533865 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:41:32.533934 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.551601 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:32.650073 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:41:32.667141 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:41:32.684669 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:41:32.702081 1064794 provision.go:87] duration metric: took 499.382435ms to configureAuth
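Note: configureAuth generated a server certificate whose SANs cover 127.0.0.1, 192.168.76.2, localhost, minikube and the profile name, then copied ca.pem, server.pem and server-key.pem into /etc/docker on the node. One way to double-check the SAN list on the generated cert (requires OpenSSL 1.1.1+ for the -ext flag; the path is the one shown in the log):

    # Print only the subjectAltName extension of the generated server certificate.
    openssl x509 -noout -ext subjectAltName \
      -in /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem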
	I1210 07:41:32.702111 1064794 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:41:32.702312 1064794 config.go:182] Loaded profile config "newest-cni-237317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:41:32.702326 1064794 machine.go:97] duration metric: took 3.972452975s to provisionDockerMachine
	I1210 07:41:32.702334 1064794 client.go:176] duration metric: took 9.950533371s to LocalClient.Create
	I1210 07:41:32.702347 1064794 start.go:167] duration metric: took 9.950596741s to libmachine.API.Create "newest-cni-237317"
	I1210 07:41:32.702357 1064794 start.go:293] postStartSetup for "newest-cni-237317" (driver="docker")
	I1210 07:41:32.702367 1064794 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:41:32.702426 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:41:32.702514 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.718852 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:32.814355 1064794 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:41:32.817769 1064794 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:41:32.817798 1064794 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:41:32.817811 1064794 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 07:41:32.817871 1064794 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 07:41:32.817953 1064794 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 07:41:32.818081 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:41:32.825310 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:41:32.842517 1064794 start.go:296] duration metric: took 140.145403ms for postStartSetup
	I1210 07:41:32.842887 1064794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:41:32.859215 1064794 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json ...
	I1210 07:41:32.859502 1064794 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:41:32.859553 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.875883 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:32.967611 1064794 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:41:32.972359 1064794 start.go:128] duration metric: took 10.224272788s to createHost
	I1210 07:41:32.972384 1064794 start.go:83] releasing machines lock for "newest-cni-237317", held for 10.224421419s
	I1210 07:41:32.972457 1064794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:41:32.990273 1064794 ssh_runner.go:195] Run: cat /version.json
	I1210 07:41:32.990351 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.990655 1064794 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:41:32.990729 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:33.013202 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:33.031539 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:33.114754 1064794 ssh_runner.go:195] Run: systemctl --version
	I1210 07:41:33.211079 1064794 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:41:33.215428 1064794 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:41:33.215545 1064794 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:41:33.242581 1064794 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1210 07:41:33.242622 1064794 start.go:496] detecting cgroup driver to use...
	I1210 07:41:33.242657 1064794 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:41:33.242740 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:41:33.257818 1064794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:41:33.270562 1064794 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:41:33.270659 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:41:33.288766 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:41:33.307284 1064794 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:41:33.417555 1064794 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:41:33.559224 1064794 docker.go:234] disabling docker service ...
	I1210 07:41:33.559382 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:41:33.583026 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:41:33.596320 1064794 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:41:33.714101 1064794 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:41:33.838575 1064794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:41:33.853369 1064794 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:41:33.868162 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:41:33.876869 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:41:33.885636 1064794 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:41:33.885711 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:41:33.894404 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:41:33.903504 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:41:33.912288 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:41:33.920951 1064794 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:41:33.929214 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:41:33.938205 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:41:33.947047 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:41:33.955864 1064794 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:41:33.963242 1064794 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:41:33.970548 1064794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:41:34.113023 1064794 ssh_runner.go:195] Run: sudo systemctl restart containerd
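Note: the preceding sed edits pin the sandbox image to pause:3.10.1, force SystemdCgroup = false (matching the "cgroupfs" driver detected on the host), normalize the runc runtime to io.containerd.runc.v2, and re-enable unprivileged ports, after which containerd is restarted. A sketch for spot-checking the rewritten config:

    # Confirm the cgroup-driver and sandbox-image edits took effect.
    docker exec newest-cni-237317 \
      grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml
    # Expect: SystemdCgroup = false, sandbox_image = "registry.k8s.io/pause:3.10.1",
    # and enable_unprivileged_ports = true.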
	I1210 07:41:34.252751 1064794 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:41:34.252855 1064794 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:41:34.256875 1064794 start.go:564] Will wait 60s for crictl version
	I1210 07:41:34.256993 1064794 ssh_runner.go:195] Run: which crictl
	I1210 07:41:34.260563 1064794 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:41:34.285437 1064794 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
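Note: with the runtime back up, minikube queries the CRI endpoint and gets containerd v2.2.0 speaking CRI v1. The same check can be run by hand against the socket that was written into /etc/crictl.yaml earlier:

    # Query the CRI runtime directly over the containerd socket.
    docker exec newest-cni-237317 \
      crictl --runtime-endpoint unix:///run/containerd/containerd.sock version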
	I1210 07:41:34.285530 1064794 ssh_runner.go:195] Run: containerd --version
	I1210 07:41:34.307510 1064794 ssh_runner.go:195] Run: containerd --version
	I1210 07:41:34.335239 1064794 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 07:41:34.338330 1064794 cli_runner.go:164] Run: docker network inspect newest-cni-237317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:41:34.356185 1064794 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 07:41:34.360231 1064794 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:41:34.373151 1064794 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 07:41:34.376063 1064794 kubeadm.go:884] updating cluster {Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:41:34.376220 1064794 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:41:34.376306 1064794 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:41:34.404402 1064794 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:41:34.404424 1064794 containerd.go:534] Images already preloaded, skipping extraction
	I1210 07:41:34.404484 1064794 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:41:34.432485 1064794 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:41:34.432510 1064794 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:41:34.432518 1064794 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1210 07:41:34.432610 1064794 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-237317 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:41:34.432688 1064794 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:41:34.457473 1064794 cni.go:84] Creating CNI manager for ""
	I1210 07:41:34.457499 1064794 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:41:34.457517 1064794 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 07:41:34.457543 1064794 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-237317 NodeName:newest-cni-237317 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:41:34.457665 1064794 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-237317"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
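Note: the rendered config above stitches together an InitConfiguration (node registration, advertise address 192.168.76.2:8443), a ClusterConfiguration (admission plugins, pod/service CIDRs, control-plane endpoint), a KubeletConfiguration (cgroupfs driver, disk-pressure eviction disabled) and a KubeProxyConfiguration, separated by YAML document markers. Before it is shipped to /var/tmp/minikube/kubeadm.yaml.new it can be sanity-checked offline; a sketch assuming a local kubeadm binary of the matching version:

    # Validate the multi-document kubeadm config without touching the cluster.
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # Or render everything kubeadm would apply, without applying it:
    kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run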
	I1210 07:41:34.457735 1064794 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:41:34.465701 1064794 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:41:34.465807 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:41:34.473755 1064794 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 07:41:34.486983 1064794 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:41:34.499868 1064794 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1210 07:41:34.513272 1064794 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:41:34.517130 1064794 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:41:34.527569 1064794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:41:34.663379 1064794 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:41:34.680375 1064794 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317 for IP: 192.168.76.2
	I1210 07:41:34.680450 1064794 certs.go:195] generating shared ca certs ...
	I1210 07:41:34.680483 1064794 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.680674 1064794 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 07:41:34.680764 1064794 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 07:41:34.680789 1064794 certs.go:257] generating profile certs ...
	I1210 07:41:34.680884 1064794 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.key
	I1210 07:41:34.680928 1064794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.crt with IP's: []
	I1210 07:41:34.839451 1064794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.crt ...
	I1210 07:41:34.839486 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.crt: {Name:mk864b17e4815ee03fc5eadc45f8f3d330d86e28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.839718 1064794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.key ...
	I1210 07:41:34.839736 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.key: {Name:mkac75ec3f8c520b4be98288202003aea88a7881 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.840557 1064794 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f
	I1210 07:41:34.840584 1064794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1210 07:41:34.941668 1064794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f ...
	I1210 07:41:34.941702 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f: {Name:mkd71b8623c8311dc88c663a4045d0b1945deec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.941880 1064794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f ...
	I1210 07:41:34.941896 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f: {Name:mk237d8326178abb6dfc7e4dd919116ec45ea9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.941986 1064794 certs.go:382] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt
	I1210 07:41:34.942080 1064794 certs.go:386] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key
	I1210 07:41:34.942146 1064794 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key
	I1210 07:41:34.942168 1064794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt with IP's: []
	I1210 07:41:35.425873 1064794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt ...
	I1210 07:41:35.425915 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt: {Name:mk51f419728d59ba7ab729d028e45d36640d0231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:35.426770 1064794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key ...
	I1210 07:41:35.426789 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key: {Name:mkb1e5352ebb3c5d51e6e8aed5c36263957e6d22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:35.426995 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 07:41:35.427044 1064794 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 07:41:35.427053 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:41:35.427086 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:41:35.427120 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:41:35.427152 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 07:41:35.427211 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:41:35.427806 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:41:35.447027 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:41:35.465831 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:41:35.484250 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:41:35.503105 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:41:35.521550 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:41:35.540269 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:41:35.563542 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 07:41:35.585738 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 07:41:35.609714 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 07:41:35.628843 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:41:35.647443 1064794 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:41:35.660385 1064794 ssh_runner.go:195] Run: openssl version
	I1210 07:41:35.666967 1064794 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.674504 1064794 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 07:41:35.682195 1064794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.685889 1064794 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.686015 1064794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.727961 1064794 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:41:35.735327 1064794 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/786751.pem /etc/ssl/certs/51391683.0
	I1210 07:41:35.742756 1064794 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.750368 1064794 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 07:41:35.757661 1064794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.761188 1064794 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.761251 1064794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.802868 1064794 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:41:35.810435 1064794 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7867512.pem /etc/ssl/certs/3ec20f2e.0
	I1210 07:41:35.817738 1064794 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.825308 1064794 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:41:35.832635 1064794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.836310 1064794 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.836372 1064794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.877289 1064794 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:41:35.884982 1064794 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
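Note: the openssl/ln sequence above implements the standard ca-certificates lookup scheme: OpenSSL finds a trusted CA by the hash of its subject name, so each PEM under /usr/share/ca-certificates gets a <subject-hash>.0 symlink in /etc/ssl/certs (51391683.0, 3ec20f2e.0 and b5213941.0 in this run). The generic pattern, for any certificate:

    # Create the subject-hash symlink OpenSSL uses to locate a trusted CA.
    cert=/usr/share/ca-certificates/minikubeCA.pem   # example path from this run
    sudo ln -fs "$cert" /etc/ssl/certs/"$(openssl x509 -hash -noout -in "$cert")".0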
	I1210 07:41:35.892614 1064794 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:41:35.896331 1064794 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:41:35.896383 1064794 kubeadm.go:401] StartCluster: {Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:41:35.896476 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:41:35.896540 1064794 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:41:35.926036 1064794 cri.go:89] found id: ""
	I1210 07:41:35.926112 1064794 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:41:35.934414 1064794 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:41:35.942276 1064794 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:41:35.942375 1064794 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:41:35.950211 1064794 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:41:35.950233 1064794 kubeadm.go:158] found existing configuration files:
	
	I1210 07:41:35.950309 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:41:35.957845 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:41:35.957962 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:41:35.966039 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:41:35.973562 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:41:35.973662 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:41:35.980914 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:41:35.988697 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:41:35.988772 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:41:35.996172 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:41:36.005049 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:41:36.005181 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:41:36.014173 1064794 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:41:36.062406 1064794 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:41:36.062713 1064794 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:41:36.147250 1064794 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:41:36.147383 1064794 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:41:36.147459 1064794 kubeadm.go:319] OS: Linux
	I1210 07:41:36.147537 1064794 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:41:36.147617 1064794 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:41:36.147688 1064794 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:41:36.147759 1064794 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:41:36.147834 1064794 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:41:36.147902 1064794 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:41:36.147977 1064794 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:41:36.148046 1064794 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:41:36.148124 1064794 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:41:36.223682 1064794 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:41:36.223913 1064794 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:41:36.224078 1064794 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:41:36.230559 1064794 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:41:36.237069 1064794 out.go:252]   - Generating certificates and keys ...
	I1210 07:41:36.237182 1064794 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:41:36.237257 1064794 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:41:36.476610 1064794 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:41:36.561778 1064794 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:41:36.854281 1064794 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:41:37.263690 1064794 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:41:37.370103 1064794 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:41:37.370484 1064794 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 07:41:37.933573 1064794 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:41:37.934013 1064794 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 07:41:38.192710 1064794 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:41:38.352643 1064794 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:41:38.587081 1064794 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:41:38.587306 1064794 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:41:38.909718 1064794 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:41:39.048089 1064794 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:41:39.097056 1064794 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:41:39.169471 1064794 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:41:39.365961 1064794 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:41:39.366635 1064794 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:41:39.369209 1064794 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:41:39.372794 1064794 out.go:252]   - Booting up control plane ...
	I1210 07:41:39.372894 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:41:39.372969 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:41:39.374027 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:41:39.390260 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:41:39.390694 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:41:39.397555 1064794 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:41:39.397883 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:41:39.397929 1064794 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:41:39.536450 1064794 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:41:39.536565 1064794 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:45:23.105552 1061272 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000403225s
	I1210 07:45:23.105596 1061272 kubeadm.go:319] 
	I1210 07:45:23.105659 1061272 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:45:23.105695 1061272 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:45:23.105810 1061272 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:45:23.105817 1061272 kubeadm.go:319] 
	I1210 07:45:23.105931 1061272 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:45:23.105968 1061272 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:45:23.106003 1061272 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:45:23.106008 1061272 kubeadm.go:319] 
	I1210 07:45:23.110089 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:23.110529 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:23.110638 1061272 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:45:23.110873 1061272 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:45:23.110878 1061272 kubeadm.go:319] 
	I1210 07:45:23.110946 1061272 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 07:45:23.111048 1061272 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-587009] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-587009] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000403225s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 07:45:23.111129 1061272 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 07:45:23.528980 1061272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:45:23.543064 1061272 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:45:23.543133 1061272 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:45:23.552680 1061272 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:45:23.552702 1061272 kubeadm.go:158] found existing configuration files:
	
	I1210 07:45:23.552757 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:45:23.561132 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:45:23.561196 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:45:23.569220 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:45:23.577552 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:45:23.577617 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:45:23.585736 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:45:23.594195 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:45:23.594261 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:45:23.602367 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:45:23.610802 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:45:23.610868 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:45:23.618934 1061272 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:45:23.738244 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:23.738666 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:23.820302 1061272 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:45:39.537616 1064794 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001129575s
	I1210 07:45:39.537650 1064794 kubeadm.go:319] 
	I1210 07:45:39.537709 1064794 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:45:39.537747 1064794 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:45:39.537857 1064794 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:45:39.537866 1064794 kubeadm.go:319] 
	I1210 07:45:39.537971 1064794 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:45:39.538008 1064794 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:45:39.538043 1064794 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:45:39.538052 1064794 kubeadm.go:319] 
	I1210 07:45:39.542860 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:39.543622 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:39.543819 1064794 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:45:39.544243 1064794 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:45:39.544260 1064794 kubeadm.go:319] 
	I1210 07:45:39.544379 1064794 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 07:45:39.544512 1064794 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001129575s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 07:45:39.544665 1064794 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 07:45:40.003717 1064794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:45:40.026427 1064794 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:45:40.026565 1064794 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:45:40.036588 1064794 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:45:40.036615 1064794 kubeadm.go:158] found existing configuration files:
	
	I1210 07:45:40.036678 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:45:40.045938 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:45:40.046015 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:45:40.054590 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:45:40.063126 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:45:40.063204 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:45:40.071408 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:45:40.079679 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:45:40.079771 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:45:40.088102 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:45:40.097134 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:45:40.097216 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:45:40.105436 1064794 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:45:40.222290 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:40.222807 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:40.298915 1064794 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:49:26.015309 1061272 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 07:49:26.015352 1061272 kubeadm.go:319] 
	I1210 07:49:26.015478 1061272 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 07:49:26.021506 1061272 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:49:26.021573 1061272 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:49:26.021669 1061272 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:49:26.021735 1061272 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:49:26.021780 1061272 kubeadm.go:319] OS: Linux
	I1210 07:49:26.021833 1061272 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:49:26.021898 1061272 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:49:26.021954 1061272 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:49:26.022012 1061272 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:49:26.022072 1061272 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:49:26.022130 1061272 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:49:26.022183 1061272 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:49:26.022239 1061272 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:49:26.022294 1061272 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:49:26.022377 1061272 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:49:26.022520 1061272 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:49:26.022665 1061272 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:49:26.022797 1061272 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:49:26.025625 1061272 out.go:252]   - Generating certificates and keys ...
	I1210 07:49:26.025738 1061272 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:49:26.025820 1061272 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:49:26.025909 1061272 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:49:26.025981 1061272 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:49:26.026084 1061272 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:49:26.026145 1061272 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:49:26.026218 1061272 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:49:26.026288 1061272 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:49:26.026372 1061272 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:49:26.026456 1061272 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:49:26.026527 1061272 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:49:26.026596 1061272 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:49:26.026658 1061272 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:49:26.026731 1061272 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:49:26.026814 1061272 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:49:26.026910 1061272 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:49:26.027000 1061272 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:49:26.027123 1061272 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:49:26.027217 1061272 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:49:26.032204 1061272 out.go:252]   - Booting up control plane ...
	I1210 07:49:26.032327 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:49:26.032449 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:49:26.032535 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:49:26.032660 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:49:26.032760 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:49:26.032871 1061272 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:49:26.032963 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:49:26.033008 1061272 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:49:26.033144 1061272 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:49:26.033252 1061272 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:49:26.033319 1061272 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00018658s
	I1210 07:49:26.033356 1061272 kubeadm.go:319] 
	I1210 07:49:26.033430 1061272 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:49:26.033471 1061272 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:49:26.033578 1061272 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:49:26.033591 1061272 kubeadm.go:319] 
	I1210 07:49:26.033695 1061272 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:49:26.033732 1061272 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:49:26.033765 1061272 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:49:26.033838 1061272 kubeadm.go:403] duration metric: took 8m9.047256448s to StartCluster
	I1210 07:49:26.033878 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:49:26.033967 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:49:26.034180 1061272 kubeadm.go:319] 
	I1210 07:49:26.078012 1061272 cri.go:89] found id: ""
	I1210 07:49:26.078053 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.078063 1061272 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:49:26.078088 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:49:26.078174 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:49:26.106609 1061272 cri.go:89] found id: ""
	I1210 07:49:26.106637 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.106653 1061272 logs.go:284] No container was found matching "etcd"
	I1210 07:49:26.106660 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:49:26.106763 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:49:26.132553 1061272 cri.go:89] found id: ""
	I1210 07:49:26.132579 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.132589 1061272 logs.go:284] No container was found matching "coredns"
	I1210 07:49:26.132595 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:49:26.132657 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:49:26.159729 1061272 cri.go:89] found id: ""
	I1210 07:49:26.159779 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.159789 1061272 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:49:26.159797 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:49:26.159864 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:49:26.185308 1061272 cri.go:89] found id: ""
	I1210 07:49:26.185386 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.185409 1061272 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:49:26.185430 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:49:26.185524 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:49:26.210297 1061272 cri.go:89] found id: ""
	I1210 07:49:26.210364 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.210388 1061272 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:49:26.210409 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:49:26.210538 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:49:26.235247 1061272 cri.go:89] found id: ""
	I1210 07:49:26.235320 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.235341 1061272 logs.go:284] No container was found matching "kindnet"
	I1210 07:49:26.235352 1061272 logs.go:123] Gathering logs for kubelet ...
	I1210 07:49:26.235364 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:49:26.292545 1061272 logs.go:123] Gathering logs for dmesg ...
	I1210 07:49:26.292580 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:49:26.309666 1061272 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:49:26.309695 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:49:26.371886 1061272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:49:26.363797    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.364463    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.366186    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.366720    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.368404    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:49:26.363797    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.364463    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.366186    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.366720    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.368404    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:49:26.371909 1061272 logs.go:123] Gathering logs for containerd ...
	I1210 07:49:26.371922 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:49:26.414122 1061272 logs.go:123] Gathering logs for container status ...
	I1210 07:49:26.414158 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:49:26.443108 1061272 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00018658s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 07:49:26.443165 1061272 out.go:285] * 
	W1210 07:49:26.443224 1061272 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00018658s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:49:26.443242 1061272 out.go:285] * 
	W1210 07:49:26.445452 1061272 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:49:26.452172 1061272 out.go:203] 
	W1210 07:49:26.455094 1061272 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00018658s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:49:26.455136 1061272 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:49:26.455159 1061272 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:49:26.458257 1061272 out.go:203] 
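	
	[Editor's note] The failure above names three concrete follow-ups, collected in the sketch below. It is illustrative only: the commands come from the kubeadm and minikube messages in this log, the profile name (no-preload-587009) is taken from the certificate lines above, and the failCgroupV1 YAML field name is an assumption based on the warning text and the linked KEP 5573, not verified against this kubelet build.
	
	    # Inspect the kubelet on the node, as kubeadm suggests
	    # (run inside the node, e.g. via 'minikube ssh'):
	    systemctl status kubelet
	    journalctl -xeu kubelet
	    curl -sSL http://127.0.0.1:10248/healthz   # the health probe kubeadm kept retrying
	
	    # Retry the start with the cgroup driver override minikube suggests:
	    out/minikube-linux-arm64 start -p no-preload-587009 \
	      --extra-config=kubelet.cgroup-driver=systemd
	
	    # Alternatively, per the SystemVerification warning, cgroup v1 can be kept
	    # for kubelet v1.35+ by setting failCgroupV1: false (assumed casing) in the
	    # KubeletConfiguration, and skipping that validation explicitly.
	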
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 07:41:06 no-preload-587009 containerd[758]: time="2025-12-10T07:41:06.789083088Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:08 no-preload-587009 containerd[758]: time="2025-12-10T07:41:08.284850821Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 10 07:41:08 no-preload-587009 containerd[758]: time="2025-12-10T07:41:08.287114833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 10 07:41:08 no-preload-587009 containerd[758]: time="2025-12-10T07:41:08.296151055Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:08 no-preload-587009 containerd[758]: time="2025-12-10T07:41:08.297726981Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:09 no-preload-587009 containerd[758]: time="2025-12-10T07:41:09.295008191Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 10 07:41:09 no-preload-587009 containerd[758]: time="2025-12-10T07:41:09.297291871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 10 07:41:09 no-preload-587009 containerd[758]: time="2025-12-10T07:41:09.305236801Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:09 no-preload-587009 containerd[758]: time="2025-12-10T07:41:09.313440846Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:10 no-preload-587009 containerd[758]: time="2025-12-10T07:41:10.490269450Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 10 07:41:10 no-preload-587009 containerd[758]: time="2025-12-10T07:41:10.493135235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 10 07:41:10 no-preload-587009 containerd[758]: time="2025-12-10T07:41:10.503850918Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:10 no-preload-587009 containerd[758]: time="2025-12-10T07:41:10.504417343Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:11 no-preload-587009 containerd[758]: time="2025-12-10T07:41:11.559054031Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 10 07:41:11 no-preload-587009 containerd[758]: time="2025-12-10T07:41:11.561269122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 10 07:41:11 no-preload-587009 containerd[758]: time="2025-12-10T07:41:11.569663283Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:11 no-preload-587009 containerd[758]: time="2025-12-10T07:41:11.570266705Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:12 no-preload-587009 containerd[758]: time="2025-12-10T07:41:12.618033993Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 10 07:41:12 no-preload-587009 containerd[758]: time="2025-12-10T07:41:12.620356878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 10 07:41:12 no-preload-587009 containerd[758]: time="2025-12-10T07:41:12.629513282Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:12 no-preload-587009 containerd[758]: time="2025-12-10T07:41:12.630204657Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:13 no-preload-587009 containerd[758]: time="2025-12-10T07:41:13.276669096Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 10 07:41:13 no-preload-587009 containerd[758]: time="2025-12-10T07:41:13.278998807Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 10 07:41:13 no-preload-587009 containerd[758]: time="2025-12-10T07:41:13.285987103Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:13 no-preload-587009 containerd[758]: time="2025-12-10T07:41:13.286306090Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:49:29.031344    5660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:29.032564    5660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:29.034379    5660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:29.034781    5660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:29.036433    5660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:49:29 up  6:31,  0 user,  load average: 0.34, 1.01, 1.70
	Linux no-preload-587009 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:49:26 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:49:26 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 10 07:49:26 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:26 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:26 no-preload-587009 kubelet[5440]: E1210 07:49:26.866548    5440 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:49:26 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:49:26 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:49:27 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 10 07:49:27 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:27 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:27 no-preload-587009 kubelet[5532]: E1210 07:49:27.601767    5532 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:49:27 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:49:27 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:49:28 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 10 07:49:28 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:28 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:28 no-preload-587009 kubelet[5568]: E1210 07:49:28.356367    5568 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:49:28 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:49:28 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:49:29 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 10 07:49:29 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:29 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:29 no-preload-587009 kubelet[5664]: E1210 07:49:29.103331    5664 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:49:29 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:49:29 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.

-- /stdout --
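The kubelet block above shows a tight crash loop (restart counter 321 through 324) with the same validation error every time: this kubelet build refuses to start on a host that still mounts the legacy cgroup v1 hierarchy. As a quick check of which hierarchy the node actually exposes (a sketch, assuming shell access to the node, e.g. via `minikube ssh -p no-preload-587009`):

	# cgroup2fs means the unified cgroup v2 hierarchy; tmpfs means the
	# legacy cgroup v1 hierarchy, the mode this kubelet rejects.
	stat -fc %T /sys/fs/cgroup/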
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-587009 -n no-preload-587009
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-587009 -n no-preload-587009: exit status 6 (330.506779ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 07:49:29.496995 1074353 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-587009" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-587009" apiserver is not running, skipping kubectl commands (state="Stopped")
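Beyond the stopped apiserver, the stderr above points at a second problem: the "no-preload-587009" endpoint is missing from the kubeconfig the harness uses, which is also what the "stale minikube-vm" warning refers to. A sketch of how one might confirm and repair that, using the paths shown in the log (not part of the harness itself):

	# List the contexts actually present in the kubeconfig the run points at
	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/22089-784887/kubeconfig
	# Rewrite the context for this profile, as the warning suggests
	out/minikube-linux-arm64 -p no-preload-587009 update-context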
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-587009
helpers_test.go:244: (dbg) docker inspect no-preload-587009:

-- stdout --
	[
	    {
	        "Id": "59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf",
	        "Created": "2025-12-10T07:40:57.013300176Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1061581,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:40:57.085196071Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/hostname",
	        "HostsPath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/hosts",
	        "LogPath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf-json.log",
	        "Name": "/no-preload-587009",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-587009:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-587009",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf",
	                "LowerDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-587009",
	                "Source": "/var/lib/docker/volumes/no-preload-587009/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-587009",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-587009",
	                "name.minikube.sigs.k8s.io": "no-preload-587009",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c8b83fbfc75ea1d8c820bf3d9633eb7375349335312aed9e093d5e02998fdbe5",
	            "SandboxKey": "/var/run/docker/netns/c8b83fbfc75e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33830"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33831"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33834"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33832"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33833"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-587009": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:b3:8b:9d:de:92",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "93aee88a5c37e6ba01b74d7794f328193d01cfce9cb66379ab00b3f3e2e73a48",
	                    "EndpointID": "5db24e5622527f7835e680ba82c923c3693544dd67ec75d3b13b6f9a54598147",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-587009",
	                        "59d9b9413fb3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
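The host-side port bindings sit deep inside NetworkSettings.Ports in the inspect output above. To read a single mapping, such as the apiserver's 8443/tcp, the same Go-template style the harness itself uses for 22/tcp later in this log works as a one-liner (a sketch):

	# Given the mapping above, this should print 33833
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-587009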
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-587009 -n no-preload-587009
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-587009 -n no-preload-587009: exit status 6 (348.537058ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 07:49:29.865776 1074428 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-587009" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-587009 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-444518 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:38 UTC │ 10 Dec 25 07:39 UTC │
	│ start   │ -p cert-expiration-611923 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                            │ cert-expiration-611923       │ jenkins │ v1.37.0 │ 10 Dec 25 07:38 UTC │ 10 Dec 25 07:38 UTC │
	│ delete  │ -p cert-expiration-611923                                                                                                                                                                                                                                  │ cert-expiration-611923       │ jenkins │ v1.37.0 │ 10 Dec 25 07:38 UTC │ 10 Dec 25 07:38 UTC │
	│ start   │ -p embed-certs-254586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:38 UTC │ 10 Dec 25 07:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-444518 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ stop    │ -p default-k8s-diff-port-444518 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-444518 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ start   │ -p default-k8s-diff-port-444518 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:40 UTC │
	│ addons  │ enable metrics-server -p embed-certs-254586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ stop    │ -p embed-certs-254586 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-254586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ start   │ -p embed-certs-254586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:41 UTC │
	│ image   │ default-k8s-diff-port-444518 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ pause   │ -p default-k8s-diff-port-444518 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ unpause │ -p default-k8s-diff-port-444518 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p default-k8s-diff-port-444518                                                                                                                                                                                                                            │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p default-k8s-diff-port-444518                                                                                                                                                                                                                            │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p disable-driver-mounts-262664                                                                                                                                                                                                                            │ disable-driver-mounts-262664 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ start   │ -p no-preload-587009 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │                     │
	│ image   │ embed-certs-254586 image list --format=json                                                                                                                                                                                                                │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ pause   │ -p embed-certs-254586 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ unpause │ -p embed-certs-254586 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ delete  │ -p embed-certs-254586                                                                                                                                                                                                                                      │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ delete  │ -p embed-certs-254586                                                                                                                                                                                                                                      │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ start   │ -p newest-cni-237317 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:41:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:41:22.419570 1064794 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:41:22.419770 1064794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:41:22.421060 1064794 out.go:374] Setting ErrFile to fd 2...
	I1210 07:41:22.421091 1064794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:41:22.421504 1064794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 07:41:22.422065 1064794 out.go:368] Setting JSON to false
	I1210 07:41:22.423214 1064794 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23007,"bootTime":1765329476,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 07:41:22.423323 1064794 start.go:143] virtualization:  
	I1210 07:41:22.427718 1064794 out.go:179] * [newest-cni-237317] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:41:22.431355 1064794 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:41:22.431601 1064794 notify.go:221] Checking for updates...
	I1210 07:41:22.438163 1064794 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:41:22.441491 1064794 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:41:22.444751 1064794 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 07:41:22.447902 1064794 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:41:22.451150 1064794 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:41:22.454892 1064794 config.go:182] Loaded profile config "no-preload-587009": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:41:22.454990 1064794 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:41:22.505215 1064794 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:41:22.505348 1064794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:41:22.599796 1064794 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-10 07:41:22.587493789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:41:22.599897 1064794 docker.go:319] overlay module found
	I1210 07:41:22.603335 1064794 out.go:179] * Using the docker driver based on user configuration
	I1210 07:41:22.606318 1064794 start.go:309] selected driver: docker
	I1210 07:41:22.606341 1064794 start.go:927] validating driver "docker" against <nil>
	I1210 07:41:22.606356 1064794 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:41:22.607143 1064794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:41:22.691557 1064794 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-10 07:41:22.681931889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:41:22.691722 1064794 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1210 07:41:22.691759 1064794 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1210 07:41:22.691991 1064794 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 07:41:22.702590 1064794 out.go:179] * Using Docker driver with root privileges
	I1210 07:41:22.705586 1064794 cni.go:84] Creating CNI manager for ""
	I1210 07:41:22.705658 1064794 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:41:22.705667 1064794 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 07:41:22.705758 1064794 start.go:353] cluster config:
	{Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:41:22.709067 1064794 out.go:179] * Starting "newest-cni-237317" primary control-plane node in "newest-cni-237317" cluster
	I1210 07:41:22.711942 1064794 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:41:22.714970 1064794 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:41:22.717872 1064794 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:41:22.717929 1064794 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 07:41:22.717943 1064794 cache.go:65] Caching tarball of preloaded images
	I1210 07:41:22.718049 1064794 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 07:41:22.718059 1064794 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1210 07:41:22.718202 1064794 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json ...
	I1210 07:41:22.718221 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json: {Name:mk35831d9cdfb4ee294c317ea1250d3c633e2dde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:22.718581 1064794 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:41:22.747190 1064794 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:41:22.747214 1064794 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:41:22.747228 1064794 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:41:22.747259 1064794 start.go:360] acquireMachinesLock for newest-cni-237317: {Name:mk865fae7594bb364b28b787041666ed4ecb9dd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:41:22.747944 1064794 start.go:364] duration metric: took 667.063µs to acquireMachinesLock for "newest-cni-237317"
	I1210 07:41:22.747984 1064794 start.go:93] Provisioning new machine with config: &{Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:41:22.748068 1064794 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:41:21.330818 1061272 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:41:21.932613 1061272 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:41:22.466827 1061272 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:41:22.874611 1061272 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:41:22.875806 1061272 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:41:22.880368 1061272 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:41:22.884509 1061272 out.go:252]   - Booting up control plane ...
	I1210 07:41:22.884617 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:41:22.884696 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:41:22.885635 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:41:22.908007 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:41:22.908118 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:41:22.919248 1061272 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:41:22.924362 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:41:22.924418 1061272 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:41:23.105604 1061272 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:41:23.105729 1061272 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:41:22.751504 1064794 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:41:22.751752 1064794 start.go:159] libmachine.API.Create for "newest-cni-237317" (driver="docker")
	I1210 07:41:22.751794 1064794 client.go:173] LocalClient.Create starting
	I1210 07:41:22.751869 1064794 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem
	I1210 07:41:22.751907 1064794 main.go:143] libmachine: Decoding PEM data...
	I1210 07:41:22.751924 1064794 main.go:143] libmachine: Parsing certificate...
	I1210 07:41:22.751982 1064794 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem
	I1210 07:41:22.751999 1064794 main.go:143] libmachine: Decoding PEM data...
	I1210 07:41:22.752011 1064794 main.go:143] libmachine: Parsing certificate...
	I1210 07:41:22.752421 1064794 cli_runner.go:164] Run: docker network inspect newest-cni-237317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:41:22.771298 1064794 cli_runner.go:211] docker network inspect newest-cni-237317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:41:22.771401 1064794 network_create.go:284] running [docker network inspect newest-cni-237317] to gather additional debugging logs...
	I1210 07:41:22.771420 1064794 cli_runner.go:164] Run: docker network inspect newest-cni-237317
	W1210 07:41:22.796107 1064794 cli_runner.go:211] docker network inspect newest-cni-237317 returned with exit code 1
	I1210 07:41:22.796138 1064794 network_create.go:287] error running [docker network inspect newest-cni-237317]: docker network inspect newest-cni-237317: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-237317 not found
	I1210 07:41:22.796157 1064794 network_create.go:289] output of [docker network inspect newest-cni-237317]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-237317 not found
	
	** /stderr **
	I1210 07:41:22.796260 1064794 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:41:22.817585 1064794 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7092cc4ae12c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:9b:65:77:38:2f} reservation:<nil>}
	I1210 07:41:22.818052 1064794 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-948cd8ab8a49 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6e:79:0a:43:2f:62} reservation:<nil>}
	I1210 07:41:22.818535 1064794 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-21ed51b7c74f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:35:c0:64:58:42} reservation:<nil>}
	I1210 07:41:22.819200 1064794 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400189ad90}
	I1210 07:41:22.819255 1064794 network_create.go:124] attempt to create docker network newest-cni-237317 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 07:41:22.819429 1064794 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-237317 newest-cni-237317
	I1210 07:41:22.888302 1064794 network_create.go:108] docker network newest-cni-237317 192.168.76.0/24 created
	I1210 07:41:22.888339 1064794 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-237317" container
	I1210 07:41:22.888413 1064794 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:41:22.905595 1064794 cli_runner.go:164] Run: docker volume create newest-cni-237317 --label name.minikube.sigs.k8s.io=newest-cni-237317 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:41:22.928697 1064794 oci.go:103] Successfully created a docker volume newest-cni-237317
	I1210 07:41:22.928792 1064794 cli_runner.go:164] Run: docker run --rm --name newest-cni-237317-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-237317 --entrypoint /usr/bin/test -v newest-cni-237317:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
	I1210 07:41:23.496836 1064794 oci.go:107] Successfully prepared a docker volume newest-cni-237317
	I1210 07:41:23.496907 1064794 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:41:23.496920 1064794 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 07:41:23.497004 1064794 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-237317:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 07:41:27.695198 1064794 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-237317:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir: (4.198140987s)
	I1210 07:41:27.695236 1064794 kic.go:203] duration metric: took 4.198307373s to extract preloaded images to volume ...
	W1210 07:41:27.695375 1064794 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 07:41:27.695491 1064794 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:41:27.749812 1064794 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-237317 --name newest-cni-237317 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-237317 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-237317 --network newest-cni-237317 --ip 192.168.76.2 --volume newest-cni-237317:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
	I1210 07:41:28.033415 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Running}}
	I1210 07:41:28.055793 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:41:28.079642 1064794 cli_runner.go:164] Run: docker exec newest-cni-237317 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:41:28.133214 1064794 oci.go:144] the created container "newest-cni-237317" has a running status.
	I1210 07:41:28.133248 1064794 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa...
	I1210 07:41:28.633820 1064794 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:41:28.653829 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:41:28.671371 1064794 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:41:28.671397 1064794 kic_runner.go:114] Args: [docker exec --privileged newest-cni-237317 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:41:28.713692 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:41:28.729850 1064794 machine.go:94] provisionDockerMachine start ...
	I1210 07:41:28.729960 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
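The -f argument to docker container inspect here is an ordinary Go text/template: it indexes .NetworkSettings.Ports["22/tcp"], takes binding 0, and reads its HostPort — which is how the provisioner learns that the container's SSH port 22 is published on 127.0.0.1:33835. A self-contained demonstration against a mock structure (portBinding is a stand-in for the relevant slice of Docker's inspect JSON):

package main

import (
	"os"
	"text/template"
)

type portBinding struct{ HostPort string }

func main() {
	// Mock of the .NetworkSettings.Ports section of `docker inspect` output.
	var data struct {
		NetworkSettings struct{ Ports map[string][]portBinding }
	}
	data.NetworkSettings.Ports = map[string][]portBinding{
		"22/tcp": {{HostPort: "33835"}},
	}
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	if err := tmpl.Execute(os.Stdout, data); err != nil { // prints 33835
		panic(err)
	}
}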
	I1210 07:41:28.748329 1064794 main.go:143] libmachine: Using SSH client type: native
	I1210 07:41:28.748679 1064794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33835 <nil> <nil>}
	I1210 07:41:28.748697 1064794 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:41:28.749343 1064794 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 07:41:31.886152 1064794 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-237317
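The single "handshake failed: EOF" above is the expected first-connect race: sshd inside the freshly started container was not yet accepting connections, and the provisioner simply redials until the handshake succeeds (about three seconds later here). A rough sketch of such a retry loop, assuming golang.org/x/crypto/ssh; dialWithRetry is a hypothetical helper, and real use would add an Auth method built from the id_rsa generated above:

package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry redials until the SSH handshake succeeds or attempts run out.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		lastErr = err
		time.Sleep(time.Second)
	}
	return nil, fmt.Errorf("ssh dial failed after %d attempts: %w", attempts, lastErr)
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker",
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node
		Timeout:         5 * time.Second,
	}
	client, err := dialWithRetry("127.0.0.1:33835", cfg, 30) // host port mapped to 22/tcp
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected")
}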
	
	I1210 07:41:31.886178 1064794 ubuntu.go:182] provisioning hostname "newest-cni-237317"
	I1210 07:41:31.886283 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:31.903879 1064794 main.go:143] libmachine: Using SSH client type: native
	I1210 07:41:31.904204 1064794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33835 <nil> <nil>}
	I1210 07:41:31.904222 1064794 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-237317 && echo "newest-cni-237317" | sudo tee /etc/hostname
	I1210 07:41:32.048555 1064794 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-237317
	
	I1210 07:41:32.048637 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.067055 1064794 main.go:143] libmachine: Using SSH client type: native
	I1210 07:41:32.067377 1064794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33835 <nil> <nil>}
	I1210 07:41:32.067401 1064794 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-237317' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-237317/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-237317' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:41:32.202608 1064794 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:41:32.202637 1064794 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 07:41:32.202667 1064794 ubuntu.go:190] setting up certificates
	I1210 07:41:32.202678 1064794 provision.go:84] configureAuth start
	I1210 07:41:32.202744 1064794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:41:32.219337 1064794 provision.go:143] copyHostCerts
	I1210 07:41:32.219404 1064794 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 07:41:32.219420 1064794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 07:41:32.219497 1064794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 07:41:32.219602 1064794 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 07:41:32.219616 1064794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 07:41:32.219646 1064794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 07:41:32.219709 1064794 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 07:41:32.219718 1064794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 07:41:32.219745 1064794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 07:41:32.219807 1064794 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.newest-cni-237317 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-237317]
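The server certificate generated here is a plain x509 cert whose SAN list covers every name the endpoint may be reached by: both IPs (127.0.0.1, 192.168.76.2) and the DNS names localhost, minikube, and newest-cni-237317. A minimal sketch of issuing a cert with that SAN set using Go's crypto/x509; it self-signs for brevity, whereas the provisioner signs with the minikube CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-237317"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // the CertExpiration from the config dump below
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:    []string{"localhost", "minikube", "newest-cni-237317"},
	}
	// Self-signed: the template doubles as parent. minikube would pass its CA here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}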
	I1210 07:41:32.533791 1064794 provision.go:177] copyRemoteCerts
	I1210 07:41:32.533865 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:41:32.533934 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.551601 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:32.650073 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:41:32.667141 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:41:32.684669 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:41:32.702081 1064794 provision.go:87] duration metric: took 499.382435ms to configureAuth
	I1210 07:41:32.702111 1064794 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:41:32.702312 1064794 config.go:182] Loaded profile config "newest-cni-237317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:41:32.702326 1064794 machine.go:97] duration metric: took 3.972452975s to provisionDockerMachine
	I1210 07:41:32.702334 1064794 client.go:176] duration metric: took 9.950533371s to LocalClient.Create
	I1210 07:41:32.702347 1064794 start.go:167] duration metric: took 9.950596741s to libmachine.API.Create "newest-cni-237317"
	I1210 07:41:32.702357 1064794 start.go:293] postStartSetup for "newest-cni-237317" (driver="docker")
	I1210 07:41:32.702367 1064794 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:41:32.702426 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:41:32.702514 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.718852 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:32.814355 1064794 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:41:32.817769 1064794 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:41:32.817798 1064794 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:41:32.817811 1064794 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 07:41:32.817871 1064794 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 07:41:32.817953 1064794 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 07:41:32.818081 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:41:32.825310 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:41:32.842517 1064794 start.go:296] duration metric: took 140.145403ms for postStartSetup
	I1210 07:41:32.842887 1064794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:41:32.859215 1064794 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json ...
	I1210 07:41:32.859502 1064794 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:41:32.859553 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.875883 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:32.967611 1064794 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:41:32.972359 1064794 start.go:128] duration metric: took 10.224272788s to createHost
	I1210 07:41:32.972384 1064794 start.go:83] releasing machines lock for "newest-cni-237317", held for 10.224421419s
	I1210 07:41:32.972457 1064794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:41:32.990273 1064794 ssh_runner.go:195] Run: cat /version.json
	I1210 07:41:32.990351 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.990655 1064794 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:41:32.990729 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:33.013202 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:33.031539 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:33.114754 1064794 ssh_runner.go:195] Run: systemctl --version
	I1210 07:41:33.211079 1064794 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:41:33.215428 1064794 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:41:33.215545 1064794 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:41:33.242581 1064794 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1210 07:41:33.242622 1064794 start.go:496] detecting cgroup driver to use...
	I1210 07:41:33.242657 1064794 detect.go:187] detected "cgroupfs" cgroup driver on host os
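The containerd configuration that follows hinges on this detection: the host reports a cgroup v1 hierarchy (see the cgroups v1 deprecation warning at the end of this run), so the "cgroupfs" driver is chosen rather than systemd. One rough way to make the same distinction — an illustrative probe, not minikube's detect package — is to check for the cgroup v2 unified-hierarchy marker file:

package main

import (
	"fmt"
	"os"
)

// detectCgroupDriver: cgroup v2 exposes /sys/fs/cgroup/cgroup.controllers,
// where the systemd driver is the usual pairing; a v1 mount lacks that file
// and is typically driven as plain "cgroupfs". Real detection is more involved.
func detectCgroupDriver() string {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		return "systemd"
	}
	return "cgroupfs"
}

func main() {
	fmt.Println("detected cgroup driver:", detectCgroupDriver())
}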
	I1210 07:41:33.242740 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:41:33.257818 1064794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:41:33.270562 1064794 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:41:33.270659 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:41:33.288766 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:41:33.307284 1064794 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:41:33.417555 1064794 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:41:33.559224 1064794 docker.go:234] disabling docker service ...
	I1210 07:41:33.559382 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:41:33.583026 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:41:33.596320 1064794 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:41:33.714101 1064794 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:41:33.838575 1064794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:41:33.853369 1064794 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:41:33.868162 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:41:33.876869 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:41:33.885636 1064794 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:41:33.885711 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:41:33.894404 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:41:33.903504 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:41:33.912288 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:41:33.920951 1064794 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:41:33.929214 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:41:33.938205 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:41:33.947047 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
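The run of sed commands above edits /etc/containerd/config.toml in place: pin the sandbox image to pause:3.10.1, force SystemdCgroup = false to match the cgroupfs decision, normalize the runc runtime to io.containerd.runc.v2, reset conf_dir, and re-insert enable_unprivileged_ports = true under the CRI plugin table. As a worked example of one of these rewrites, the SystemdCgroup edit translates to a multi-line regexp replacement (the sample config text here is illustrative):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
}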
	I1210 07:41:33.955864 1064794 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:41:33.963242 1064794 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:41:33.970548 1064794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:41:34.113023 1064794 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 07:41:34.252751 1064794 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:41:34.252855 1064794 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:41:34.256875 1064794 start.go:564] Will wait 60s for crictl version
	I1210 07:41:34.256993 1064794 ssh_runner.go:195] Run: which crictl
	I1210 07:41:34.260563 1064794 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:41:34.285437 1064794 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:41:34.285530 1064794 ssh_runner.go:195] Run: containerd --version
	I1210 07:41:34.307510 1064794 ssh_runner.go:195] Run: containerd --version
	I1210 07:41:34.335239 1064794 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 07:41:34.338330 1064794 cli_runner.go:164] Run: docker network inspect newest-cni-237317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:41:34.356185 1064794 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 07:41:34.360231 1064794 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:41:34.373151 1064794 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 07:41:34.376063 1064794 kubeadm.go:884] updating cluster {Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:41:34.376220 1064794 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:41:34.376306 1064794 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:41:34.404402 1064794 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:41:34.404424 1064794 containerd.go:534] Images already preloaded, skipping extraction
	I1210 07:41:34.404484 1064794 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:41:34.432485 1064794 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:41:34.432510 1064794 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:41:34.432518 1064794 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1210 07:41:34.432610 1064794 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-237317 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:41:34.432688 1064794 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:41:34.457473 1064794 cni.go:84] Creating CNI manager for ""
	I1210 07:41:34.457499 1064794 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:41:34.457517 1064794 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 07:41:34.457543 1064794 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-237317 NodeName:newest-cni-237317 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:41:34.457665 1064794 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-237317"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:41:34.457735 1064794 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:41:34.465701 1064794 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:41:34.465807 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:41:34.473755 1064794 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 07:41:34.486983 1064794 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:41:34.499868 1064794 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1210 07:41:34.513272 1064794 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:41:34.517130 1064794 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
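Both /etc/hosts edits in this start-up (host.minikube.internal earlier, control-plane.minikube.internal here) use the same idempotent pattern: filter out any stale line for the name, append the fresh mapping to a temp file, then copy the result back over /etc/hosts. A sketch of that upsert in Go; upsertHost is a hypothetical helper mirroring the bash one-liner:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites a hosts file so exactly one line maps the given name:
// drop every existing "<ip>\t<name>" entry, then append the new one.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale mapping, the equivalent of grep -v
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}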
	I1210 07:41:34.527569 1064794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:41:34.663379 1064794 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:41:34.680375 1064794 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317 for IP: 192.168.76.2
	I1210 07:41:34.680450 1064794 certs.go:195] generating shared ca certs ...
	I1210 07:41:34.680483 1064794 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.680674 1064794 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 07:41:34.680764 1064794 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 07:41:34.680789 1064794 certs.go:257] generating profile certs ...
	I1210 07:41:34.680884 1064794 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.key
	I1210 07:41:34.680928 1064794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.crt with IP's: []
	I1210 07:41:34.839451 1064794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.crt ...
	I1210 07:41:34.839486 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.crt: {Name:mk864b17e4815ee03fc5eadc45f8f3d330d86e28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.839718 1064794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.key ...
	I1210 07:41:34.839736 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.key: {Name:mkac75ec3f8c520b4be98288202003aea88a7881 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.840557 1064794 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f
	I1210 07:41:34.840584 1064794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1210 07:41:34.941668 1064794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f ...
	I1210 07:41:34.941702 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f: {Name:mkd71b8623c8311dc88c663a4045d0b1945deec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.941880 1064794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f ...
	I1210 07:41:34.941896 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f: {Name:mk237d8326178abb6dfc7e4dd919116ec45ea9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.941986 1064794 certs.go:382] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt
	I1210 07:41:34.942080 1064794 certs.go:386] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key
	I1210 07:41:34.942146 1064794 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key
	I1210 07:41:34.942168 1064794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt with IP's: []
	I1210 07:41:35.425873 1064794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt ...
	I1210 07:41:35.425915 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt: {Name:mk51f419728d59ba7ab729d028e45d36640d0231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:35.426770 1064794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key ...
	I1210 07:41:35.426789 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key: {Name:mkb1e5352ebb3c5d51e6e8aed5c36263957e6d22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:35.426995 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 07:41:35.427044 1064794 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 07:41:35.427053 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:41:35.427086 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:41:35.427120 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:41:35.427152 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 07:41:35.427211 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:41:35.427806 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:41:35.447027 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:41:35.465831 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:41:35.484250 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:41:35.503105 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:41:35.521550 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:41:35.540269 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:41:35.563542 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 07:41:35.585738 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 07:41:35.609714 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 07:41:35.628843 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:41:35.647443 1064794 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:41:35.660385 1064794 ssh_runner.go:195] Run: openssl version
	I1210 07:41:35.666967 1064794 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.674504 1064794 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 07:41:35.682195 1064794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.685889 1064794 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.686015 1064794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.727961 1064794 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:41:35.735327 1064794 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/786751.pem /etc/ssl/certs/51391683.0
	I1210 07:41:35.742756 1064794 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.750368 1064794 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 07:41:35.757661 1064794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.761188 1064794 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.761251 1064794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.802868 1064794 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:41:35.810435 1064794 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7867512.pem /etc/ssl/certs/3ec20f2e.0
	I1210 07:41:35.817738 1064794 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.825308 1064794 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:41:35.832635 1064794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.836310 1064794 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.836372 1064794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.877289 1064794 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:41:35.884982 1064794 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
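The ln -fs pairs above (786751.pem -> 51391683.0, 7867512.pem -> 3ec20f2e.0, minikubeCA.pem -> b5213941.0) implement OpenSSL's hashed-directory layout: each CA in /etc/ssl/certs gets a symlink named <subject-hash>.0 so verifiers can look it up by hash. A sketch that derives the link name the same way the log does, by shelling out to the identical openssl x509 -hash invocation (hashLink is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink creates <certsDir>/<subject-hash>.0 -> certPath.
func hashLink(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := hashLink("/etc/ssl/certs/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}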
	I1210 07:41:35.892614 1064794 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:41:35.896331 1064794 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:41:35.896383 1064794 kubeadm.go:401] StartCluster: {Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:41:35.896476 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:41:35.896540 1064794 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:41:35.926036 1064794 cri.go:89] found id: ""
	I1210 07:41:35.926112 1064794 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:41:35.934414 1064794 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:41:35.942276 1064794 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:41:35.942375 1064794 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:41:35.950211 1064794 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:41:35.950233 1064794 kubeadm.go:158] found existing configuration files:
	
	I1210 07:41:35.950309 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:41:35.957845 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:41:35.957962 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:41:35.966039 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:41:35.973562 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:41:35.973662 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:41:35.980914 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:41:35.988697 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:41:35.988772 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:41:35.996172 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:41:36.005049 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:41:36.005181 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:41:36.014173 1064794 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:41:36.062406 1064794 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:41:36.062713 1064794 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:41:36.147250 1064794 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:41:36.147383 1064794 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:41:36.147459 1064794 kubeadm.go:319] OS: Linux
	I1210 07:41:36.147537 1064794 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:41:36.147617 1064794 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:41:36.147688 1064794 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:41:36.147759 1064794 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:41:36.147834 1064794 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:41:36.147902 1064794 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:41:36.147977 1064794 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:41:36.148046 1064794 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:41:36.148124 1064794 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:41:36.223682 1064794 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:41:36.223913 1064794 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:41:36.224078 1064794 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:41:36.230559 1064794 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:41:36.237069 1064794 out.go:252]   - Generating certificates and keys ...
	I1210 07:41:36.237182 1064794 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:41:36.237257 1064794 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:41:36.476610 1064794 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:41:36.561778 1064794 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:41:36.854281 1064794 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:41:37.263690 1064794 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:41:37.370103 1064794 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:41:37.370484 1064794 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 07:41:37.933573 1064794 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:41:37.934013 1064794 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 07:41:38.192710 1064794 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:41:38.352643 1064794 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:41:38.587081 1064794 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:41:38.587306 1064794 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:41:38.909718 1064794 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:41:39.048089 1064794 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:41:39.097056 1064794 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:41:39.169471 1064794 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:41:39.365961 1064794 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:41:39.366635 1064794 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:41:39.369209 1064794 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:41:39.372794 1064794 out.go:252]   - Booting up control plane ...
	I1210 07:41:39.372894 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:41:39.372969 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:41:39.374027 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:41:39.390260 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:41:39.390694 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:41:39.397555 1064794 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:41:39.397883 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:41:39.397929 1064794 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:41:39.536450 1064794 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:41:39.536565 1064794 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:45:23.105552 1061272 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000403225s
	I1210 07:45:23.105596 1061272 kubeadm.go:319] 
	I1210 07:45:23.105659 1061272 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:45:23.105695 1061272 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:45:23.105810 1061272 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:45:23.105817 1061272 kubeadm.go:319] 
	I1210 07:45:23.105931 1061272 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:45:23.105968 1061272 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:45:23.106003 1061272 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:45:23.106008 1061272 kubeadm.go:319] 
	I1210 07:45:23.110089 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:23.110529 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:23.110638 1061272 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:45:23.110873 1061272 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:45:23.110878 1061272 kubeadm.go:319] 
	I1210 07:45:23.110946 1061272 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
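The failure mode above is kubeadm's kubelet health check: after writing the static pod manifests it polls http://127.0.0.1:10248/healthz for up to 4m0s and aborts the wait-control-plane phase when the deadline passes, which is exactly the "context deadline exceeded" reported here. The check reduces to a bounded HTTP poll, roughly as below (a sketch, not kubeadm's code; the one-second interval and error text are assumptions):

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// waitKubeletHealthy polls the kubelet healthz endpoint until it answers
// 200 OK or the deadline passes.
func waitKubeletHealthy(url string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	for {
		req, _ := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		resp, err := http.DefaultClient.Do(req)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("kubelet not healthy after %s: %w", timeout, ctx.Err())
		case <-time.After(time.Second):
		}
	}
}

func main() {
	if err := waitKubeletHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}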
	W1210 07:45:23.111048 1061272 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-587009] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-587009] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000403225s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 07:45:23.111129 1061272 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 07:45:23.528980 1061272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:45:23.543064 1061272 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:45:23.543133 1061272 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:45:23.552680 1061272 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:45:23.552702 1061272 kubeadm.go:158] found existing configuration files:
	
	I1210 07:45:23.552757 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:45:23.561132 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:45:23.561196 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:45:23.569220 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:45:23.577552 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:45:23.577617 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:45:23.585736 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:45:23.594195 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:45:23.594261 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:45:23.602367 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:45:23.610802 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:45:23.610868 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:45:23.618934 1061272 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:45:23.738244 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:23.738666 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:23.820302 1061272 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:45:39.537616 1064794 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001129575s
	I1210 07:45:39.537650 1064794 kubeadm.go:319] 
	I1210 07:45:39.537709 1064794 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:45:39.537747 1064794 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:45:39.537857 1064794 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:45:39.537866 1064794 kubeadm.go:319] 
	I1210 07:45:39.537971 1064794 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:45:39.538008 1064794 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:45:39.538043 1064794 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:45:39.538052 1064794 kubeadm.go:319] 
	I1210 07:45:39.542860 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:39.543622 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:39.543819 1064794 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:45:39.544243 1064794 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:45:39.544260 1064794 kubeadm.go:319] 
	I1210 07:45:39.544379 1064794 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 07:45:39.544512 1064794 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001129575s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 07:45:39.544665 1064794 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 07:45:40.003717 1064794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:45:40.026427 1064794 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:45:40.026565 1064794 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:45:40.036588 1064794 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:45:40.036615 1064794 kubeadm.go:158] found existing configuration files:
	
	I1210 07:45:40.036678 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:45:40.045938 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:45:40.046015 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:45:40.054590 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:45:40.063126 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:45:40.063204 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:45:40.071408 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:45:40.079679 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:45:40.079771 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:45:40.088102 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:45:40.097134 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:45:40.097216 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:45:40.105436 1064794 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:45:40.222290 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:40.222807 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:40.298915 1064794 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:49:26.015309 1061272 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 07:49:26.015352 1061272 kubeadm.go:319] 
	I1210 07:49:26.015478 1061272 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 07:49:26.021506 1061272 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:49:26.021573 1061272 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:49:26.021669 1061272 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:49:26.021735 1061272 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:49:26.021780 1061272 kubeadm.go:319] OS: Linux
	I1210 07:49:26.021833 1061272 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:49:26.021898 1061272 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:49:26.021954 1061272 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:49:26.022012 1061272 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:49:26.022072 1061272 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:49:26.022130 1061272 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:49:26.022183 1061272 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:49:26.022239 1061272 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:49:26.022294 1061272 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:49:26.022377 1061272 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:49:26.022520 1061272 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:49:26.022665 1061272 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:49:26.022797 1061272 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:49:26.025625 1061272 out.go:252]   - Generating certificates and keys ...
	I1210 07:49:26.025738 1061272 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:49:26.025820 1061272 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:49:26.025909 1061272 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:49:26.025981 1061272 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:49:26.026084 1061272 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:49:26.026145 1061272 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:49:26.026218 1061272 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:49:26.026288 1061272 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:49:26.026372 1061272 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:49:26.026456 1061272 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:49:26.026527 1061272 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:49:26.026596 1061272 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:49:26.026658 1061272 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:49:26.026731 1061272 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:49:26.026814 1061272 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:49:26.026910 1061272 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:49:26.027000 1061272 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:49:26.027123 1061272 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:49:26.027217 1061272 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:49:26.032204 1061272 out.go:252]   - Booting up control plane ...
	I1210 07:49:26.032327 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:49:26.032449 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:49:26.032535 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:49:26.032660 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:49:26.032760 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:49:26.032871 1061272 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:49:26.032963 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:49:26.033008 1061272 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:49:26.033144 1061272 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:49:26.033252 1061272 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:49:26.033319 1061272 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00018658s
	I1210 07:49:26.033356 1061272 kubeadm.go:319] 
	I1210 07:49:26.033430 1061272 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:49:26.033471 1061272 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:49:26.033578 1061272 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:49:26.033591 1061272 kubeadm.go:319] 
	I1210 07:49:26.033695 1061272 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:49:26.033732 1061272 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:49:26.033765 1061272 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:49:26.033838 1061272 kubeadm.go:403] duration metric: took 8m9.047256448s to StartCluster
	I1210 07:49:26.033878 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:49:26.033967 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:49:26.034180 1061272 kubeadm.go:319] 
	I1210 07:49:26.078012 1061272 cri.go:89] found id: ""
	I1210 07:49:26.078053 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.078063 1061272 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:49:26.078088 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:49:26.078174 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:49:26.106609 1061272 cri.go:89] found id: ""
	I1210 07:49:26.106637 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.106653 1061272 logs.go:284] No container was found matching "etcd"
	I1210 07:49:26.106660 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:49:26.106763 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:49:26.132553 1061272 cri.go:89] found id: ""
	I1210 07:49:26.132579 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.132589 1061272 logs.go:284] No container was found matching "coredns"
	I1210 07:49:26.132595 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:49:26.132657 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:49:26.159729 1061272 cri.go:89] found id: ""
	I1210 07:49:26.159779 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.159789 1061272 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:49:26.159797 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:49:26.159864 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:49:26.185308 1061272 cri.go:89] found id: ""
	I1210 07:49:26.185386 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.185409 1061272 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:49:26.185430 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:49:26.185524 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:49:26.210297 1061272 cri.go:89] found id: ""
	I1210 07:49:26.210364 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.210388 1061272 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:49:26.210409 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:49:26.210538 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:49:26.235247 1061272 cri.go:89] found id: ""
	I1210 07:49:26.235320 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.235341 1061272 logs.go:284] No container was found matching "kindnet"
	I1210 07:49:26.235352 1061272 logs.go:123] Gathering logs for kubelet ...
	I1210 07:49:26.235364 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:49:26.292545 1061272 logs.go:123] Gathering logs for dmesg ...
	I1210 07:49:26.292580 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:49:26.309666 1061272 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:49:26.309695 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:49:26.371886 1061272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:49:26.363797    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.364463    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.366186    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.366720    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.368404    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:49:26.363797    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.364463    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.366186    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.366720    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.368404    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:49:26.371909 1061272 logs.go:123] Gathering logs for containerd ...
	I1210 07:49:26.371922 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:49:26.414122 1061272 logs.go:123] Gathering logs for container status ...
	I1210 07:49:26.414158 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:49:26.443108 1061272 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00018658s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 07:49:26.443165 1061272 out.go:285] * 
	W1210 07:49:26.443224 1061272 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00018658s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:49:26.443242 1061272 out.go:285] * 
	W1210 07:49:26.445452 1061272 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:49:26.452172 1061272 out.go:203] 
	W1210 07:49:26.455094 1061272 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00018658s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:49:26.455136 1061272 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:49:26.455159 1061272 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:49:26.458257 1061272 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 07:41:06 no-preload-587009 containerd[758]: time="2025-12-10T07:41:06.789083088Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:08 no-preload-587009 containerd[758]: time="2025-12-10T07:41:08.284850821Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 10 07:41:08 no-preload-587009 containerd[758]: time="2025-12-10T07:41:08.287114833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 10 07:41:08 no-preload-587009 containerd[758]: time="2025-12-10T07:41:08.296151055Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:08 no-preload-587009 containerd[758]: time="2025-12-10T07:41:08.297726981Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:09 no-preload-587009 containerd[758]: time="2025-12-10T07:41:09.295008191Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 10 07:41:09 no-preload-587009 containerd[758]: time="2025-12-10T07:41:09.297291871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 10 07:41:09 no-preload-587009 containerd[758]: time="2025-12-10T07:41:09.305236801Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:09 no-preload-587009 containerd[758]: time="2025-12-10T07:41:09.313440846Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:10 no-preload-587009 containerd[758]: time="2025-12-10T07:41:10.490269450Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 10 07:41:10 no-preload-587009 containerd[758]: time="2025-12-10T07:41:10.493135235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 10 07:41:10 no-preload-587009 containerd[758]: time="2025-12-10T07:41:10.503850918Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:10 no-preload-587009 containerd[758]: time="2025-12-10T07:41:10.504417343Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:11 no-preload-587009 containerd[758]: time="2025-12-10T07:41:11.559054031Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 10 07:41:11 no-preload-587009 containerd[758]: time="2025-12-10T07:41:11.561269122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 10 07:41:11 no-preload-587009 containerd[758]: time="2025-12-10T07:41:11.569663283Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:11 no-preload-587009 containerd[758]: time="2025-12-10T07:41:11.570266705Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:12 no-preload-587009 containerd[758]: time="2025-12-10T07:41:12.618033993Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 10 07:41:12 no-preload-587009 containerd[758]: time="2025-12-10T07:41:12.620356878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 10 07:41:12 no-preload-587009 containerd[758]: time="2025-12-10T07:41:12.629513282Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:12 no-preload-587009 containerd[758]: time="2025-12-10T07:41:12.630204657Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:13 no-preload-587009 containerd[758]: time="2025-12-10T07:41:13.276669096Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 10 07:41:13 no-preload-587009 containerd[758]: time="2025-12-10T07:41:13.278998807Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 10 07:41:13 no-preload-587009 containerd[758]: time="2025-12-10T07:41:13.285987103Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:13 no-preload-587009 containerd[758]: time="2025-12-10T07:41:13.286306090Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:49:30.544214    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:30.544989    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:30.546717    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:30.547290    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:30.548434    5793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:49:30 up  6:31,  0 user,  load average: 0.55, 1.04, 1.71
	Linux no-preload-587009 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:49:27 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:49:28 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 10 07:49:28 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:28 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:28 no-preload-587009 kubelet[5568]: E1210 07:49:28.356367    5568 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:49:28 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:49:28 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:49:29 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 10 07:49:29 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:29 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:29 no-preload-587009 kubelet[5664]: E1210 07:49:29.103331    5664 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:49:29 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:49:29 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:49:29 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 10 07:49:29 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:29 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:29 no-preload-587009 kubelet[5707]: E1210 07:49:29.841771    5707 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:49:29 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:49:29 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:49:30 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 326.
	Dec 10 07:49:30 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:30 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:49:30 no-preload-587009 kubelet[5797]: E1210 07:49:30.611301    5797 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:49:30 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:49:30 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
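The kubelet log above pinpoints the root cause for this profile: the v1.35.0-beta.0 kubelet validates its configuration against the host cgroup hierarchy and exits on cgroup v1, systemd restart-loops it (counter at 326 and climbing), the apiserver never comes up, and every kubectl call in the describe-nodes output is refused. A minimal sketch for confirming the host cgroup version, assuming a standard Linux node (illustrative commands, not part of the harness):

	# cgroup2fs means a unified cgroup v2 hierarchy; tmpfs means legacy cgroup v1
	stat -fc %T /sys/fs/cgroup/
	# the same check from inside the minikube node for this profile
	out/minikube-linux-arm64 ssh -p no-preload-587009 -- stat -fc %T /sys/fs/cgroup/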
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-587009 -n no-preload-587009
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-587009 -n no-preload-587009: exit status 6 (319.259898ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 07:49:30.987216 1074658 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-587009" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-587009" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (3.00s)
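DeployApp fails within three seconds because the harness gates its kubectl steps on the status probe above: with the apiserver reported Stopped, it skips deployment entirely rather than attempting to apply anything. The same probe can be replayed by hand (command taken verbatim from the log above):

	# prints Stopped here; exit status 6 signals a kubeconfig/endpoint problem
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-587009 -n no-preload-587009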

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (97.95s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-587009 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-587009 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m36.425768478s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
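All four validation errors share a single cause: the addon callback's kubectl apply cannot reach the apiserver on localhost:8443 (the same kubelet crash loop seen in the FirstStart failure). Before suspecting the addon manifests themselves, a reachability probe along the same path the callback uses is enough to confirm this; a sketch using the binary and kubeconfig paths from the log:

	# /readyz returns "ok" only when the apiserver is up and serving
	out/minikube-linux-arm64 ssh -p no-preload-587009 -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw=/readyz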
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-587009 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-587009 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-587009 describe deploy/metrics-server -n kube-system: exit status 1 (54.135686ms)

** stderr ** 
	error: context "no-preload-587009" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-587009 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
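The empty deployment info is a knock-on effect rather than an image problem: the follow-up kubectl runs on the host, and the "no-preload-587009" context was never written to the kubeconfig (see the status stderr further down), so describe fails before any image can be checked. A quick way to confirm the missing context, assuming kubectl is available on the host:

	# lists context names; an empty result means the profile was never registered
	kubectl config get-contexts -o name | grep no-preload-587009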
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-587009
helpers_test.go:244: (dbg) docker inspect no-preload-587009:

-- stdout --
	[
	    {
	        "Id": "59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf",
	        "Created": "2025-12-10T07:40:57.013300176Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1061581,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:40:57.085196071Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/hostname",
	        "HostsPath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/hosts",
	        "LogPath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf-json.log",
	        "Name": "/no-preload-587009",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-587009:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-587009",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf",
	                "LowerDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-587009",
	                "Source": "/var/lib/docker/volumes/no-preload-587009/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-587009",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-587009",
	                "name.minikube.sigs.k8s.io": "no-preload-587009",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c8b83fbfc75ea1d8c820bf3d9633eb7375349335312aed9e093d5e02998fdbe5",
	            "SandboxKey": "/var/run/docker/netns/c8b83fbfc75e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33830"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33831"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33834"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33832"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33833"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-587009": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:b3:8b:9d:de:92",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "93aee88a5c37e6ba01b74d7794f328193d01cfce9cb66379ab00b3f3e2e73a48",
	                    "EndpointID": "5db24e5622527f7835e680ba82c923c3693544dd67ec75d3b13b6f9a54598147",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-587009",
	                        "59d9b9413fb3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
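The inspect output confirms the container itself is healthy: it is running, and the apiserver port 8443/tcp is published on 127.0.0.1:33833, so the failure sits inside the node rather than in Docker networking. A one-liner to pull that mapping out of the same inspect data (standard docker templating, shown for this profile):

	# prints 33833 for this run
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-587009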
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-587009 -n no-preload-587009
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-587009 -n no-preload-587009: exit status 6 (301.876164ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 07:51:07.786328 1076826 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-587009" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig

** /stderr **
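This status error matches the WARNING in stdout: the profile is missing from /home/jenkins/minikube-integration/22089-784887/kubeconfig. minikube's own remedy, as the warning suggests, is to rewrite the context (though it only helps once the apiserver is reachable again):

	out/minikube-linux-arm64 update-context -p no-preload-587009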
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-587009 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-expiration-611923                                                                                                                                                                                                                                  │ cert-expiration-611923       │ jenkins │ v1.37.0 │ 10 Dec 25 07:38 UTC │ 10 Dec 25 07:38 UTC │
	│ start   │ -p embed-certs-254586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:38 UTC │ 10 Dec 25 07:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-444518 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ stop    │ -p default-k8s-diff-port-444518 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-444518 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ start   │ -p default-k8s-diff-port-444518 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:40 UTC │
	│ addons  │ enable metrics-server -p embed-certs-254586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ stop    │ -p embed-certs-254586 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-254586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ start   │ -p embed-certs-254586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:41 UTC │
	│ image   │ default-k8s-diff-port-444518 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ pause   │ -p default-k8s-diff-port-444518 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ unpause │ -p default-k8s-diff-port-444518 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p default-k8s-diff-port-444518                                                                                                                                                                                                                            │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p default-k8s-diff-port-444518                                                                                                                                                                                                                            │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p disable-driver-mounts-262664                                                                                                                                                                                                                            │ disable-driver-mounts-262664 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ start   │ -p no-preload-587009 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │                     │
	│ image   │ embed-certs-254586 image list --format=json                                                                                                                                                                                                                │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ pause   │ -p embed-certs-254586 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ unpause │ -p embed-certs-254586 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ delete  │ -p embed-certs-254586                                                                                                                                                                                                                                      │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ delete  │ -p embed-certs-254586                                                                                                                                                                                                                                      │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ start   │ -p newest-cni-237317 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-587009 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:49 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-237317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:49 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:41:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:41:22.419570 1064794 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:41:22.419770 1064794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:41:22.421060 1064794 out.go:374] Setting ErrFile to fd 2...
	I1210 07:41:22.421091 1064794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:41:22.421504 1064794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 07:41:22.422065 1064794 out.go:368] Setting JSON to false
	I1210 07:41:22.423214 1064794 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23007,"bootTime":1765329476,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 07:41:22.423323 1064794 start.go:143] virtualization:  
	I1210 07:41:22.427718 1064794 out.go:179] * [newest-cni-237317] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:41:22.431355 1064794 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:41:22.431601 1064794 notify.go:221] Checking for updates...
	I1210 07:41:22.438163 1064794 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:41:22.441491 1064794 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:41:22.444751 1064794 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 07:41:22.447902 1064794 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:41:22.451150 1064794 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:41:22.454892 1064794 config.go:182] Loaded profile config "no-preload-587009": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:41:22.454990 1064794 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:41:22.505215 1064794 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:41:22.505348 1064794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:41:22.599796 1064794 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-10 07:41:22.587493789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:41:22.599897 1064794 docker.go:319] overlay module found
	I1210 07:41:22.603335 1064794 out.go:179] * Using the docker driver based on user configuration
	I1210 07:41:22.606318 1064794 start.go:309] selected driver: docker
	I1210 07:41:22.606341 1064794 start.go:927] validating driver "docker" against <nil>
	I1210 07:41:22.606356 1064794 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:41:22.607143 1064794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:41:22.691557 1064794 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-10 07:41:22.681931889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:41:22.691722 1064794 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1210 07:41:22.691759 1064794 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1210 07:41:22.691991 1064794 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 07:41:22.702590 1064794 out.go:179] * Using Docker driver with root privileges
	I1210 07:41:22.705586 1064794 cni.go:84] Creating CNI manager for ""
	I1210 07:41:22.705658 1064794 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:41:22.705667 1064794 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 07:41:22.705758 1064794 start.go:353] cluster config:
	{Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:41:22.709067 1064794 out.go:179] * Starting "newest-cni-237317" primary control-plane node in "newest-cni-237317" cluster
	I1210 07:41:22.711942 1064794 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:41:22.714970 1064794 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:41:22.717872 1064794 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:41:22.717929 1064794 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 07:41:22.717943 1064794 cache.go:65] Caching tarball of preloaded images
	I1210 07:41:22.718049 1064794 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 07:41:22.718059 1064794 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1210 07:41:22.718202 1064794 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json ...
	I1210 07:41:22.718221 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json: {Name:mk35831d9cdfb4ee294c317ea1250d3c633e2dde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:22.718581 1064794 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:41:22.747190 1064794 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:41:22.747214 1064794 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:41:22.747228 1064794 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:41:22.747259 1064794 start.go:360] acquireMachinesLock for newest-cni-237317: {Name:mk865fae7594bb364b28b787041666ed4ecb9dd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:41:22.747944 1064794 start.go:364] duration metric: took 667.063µs to acquireMachinesLock for "newest-cni-237317"
	I1210 07:41:22.747984 1064794 start.go:93] Provisioning new machine with config: &{Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:41:22.748068 1064794 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:41:21.330818 1061272 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:41:21.932613 1061272 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:41:22.466827 1061272 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:41:22.874611 1061272 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:41:22.875806 1061272 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:41:22.880368 1061272 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:41:22.884509 1061272 out.go:252]   - Booting up control plane ...
	I1210 07:41:22.884617 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:41:22.884696 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:41:22.885635 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:41:22.908007 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:41:22.908118 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:41:22.919248 1061272 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:41:22.924362 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:41:22.924418 1061272 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:41:23.105604 1061272 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:41:23.105729 1061272 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:41:22.751504 1064794 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:41:22.751752 1064794 start.go:159] libmachine.API.Create for "newest-cni-237317" (driver="docker")
	I1210 07:41:22.751794 1064794 client.go:173] LocalClient.Create starting
	I1210 07:41:22.751869 1064794 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem
	I1210 07:41:22.751907 1064794 main.go:143] libmachine: Decoding PEM data...
	I1210 07:41:22.751924 1064794 main.go:143] libmachine: Parsing certificate...
	I1210 07:41:22.751982 1064794 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem
	I1210 07:41:22.751999 1064794 main.go:143] libmachine: Decoding PEM data...
	I1210 07:41:22.752011 1064794 main.go:143] libmachine: Parsing certificate...
	I1210 07:41:22.752421 1064794 cli_runner.go:164] Run: docker network inspect newest-cni-237317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:41:22.771298 1064794 cli_runner.go:211] docker network inspect newest-cni-237317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:41:22.771401 1064794 network_create.go:284] running [docker network inspect newest-cni-237317] to gather additional debugging logs...
	I1210 07:41:22.771420 1064794 cli_runner.go:164] Run: docker network inspect newest-cni-237317
	W1210 07:41:22.796107 1064794 cli_runner.go:211] docker network inspect newest-cni-237317 returned with exit code 1
	I1210 07:41:22.796138 1064794 network_create.go:287] error running [docker network inspect newest-cni-237317]: docker network inspect newest-cni-237317: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-237317 not found
	I1210 07:41:22.796157 1064794 network_create.go:289] output of [docker network inspect newest-cni-237317]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-237317 not found
	
	** /stderr **
	I1210 07:41:22.796260 1064794 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:41:22.817585 1064794 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7092cc4ae12c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:9b:65:77:38:2f} reservation:<nil>}
	I1210 07:41:22.818052 1064794 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-948cd8ab8a49 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6e:79:0a:43:2f:62} reservation:<nil>}
	I1210 07:41:22.818535 1064794 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-21ed51b7c74f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:35:c0:64:58:42} reservation:<nil>}
	I1210 07:41:22.819200 1064794 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400189ad90}
	I1210 07:41:22.819255 1064794 network_create.go:124] attempt to create docker network newest-cni-237317 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 07:41:22.819429 1064794 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-237317 newest-cni-237317
	I1210 07:41:22.888302 1064794 network_create.go:108] docker network newest-cni-237317 192.168.76.0/24 created
	I1210 07:41:22.888339 1064794 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-237317" container
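The three "skipping subnet" probes above step through 192.168.49.0/24, .58.0/24 and .67.0/24 before settling on 192.168.76.0/24. A minimal shell sketch of the same probe against a local Docker daemon (the +9 stride and the candidate list are inferred from this log, not a documented minikube contract):

    # Collect subnets already claimed by Docker networks, then walk
    # candidate /24s until one is free.
    taken=$(docker network ls -q | xargs docker network inspect \
              -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}')
    for third in 49 58 67 76 85 94; do
      cidr="192.168.${third}.0/24"
      grep -qF "$cidr" <<<"$taken" || { echo "free subnet: $cidr"; break; }
    done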
	I1210 07:41:22.888413 1064794 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:41:22.905595 1064794 cli_runner.go:164] Run: docker volume create newest-cni-237317 --label name.minikube.sigs.k8s.io=newest-cni-237317 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:41:22.928697 1064794 oci.go:103] Successfully created a docker volume newest-cni-237317
	I1210 07:41:22.928792 1064794 cli_runner.go:164] Run: docker run --rm --name newest-cni-237317-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-237317 --entrypoint /usr/bin/test -v newest-cni-237317:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
	I1210 07:41:23.496836 1064794 oci.go:107] Successfully prepared a docker volume newest-cni-237317
	I1210 07:41:23.496907 1064794 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:41:23.496920 1064794 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 07:41:23.497004 1064794 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-237317:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 07:41:27.695198 1064794 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-237317:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir: (4.198140987s)
	I1210 07:41:27.695236 1064794 kic.go:203] duration metric: took 4.198307373s to extract preloaded images to volume ...
	W1210 07:41:27.695375 1064794 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 07:41:27.695491 1064794 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:41:27.749812 1064794 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-237317 --name newest-cni-237317 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-237317 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-237317 --network newest-cni-237317 --ip 192.168.76.2 --volume newest-cni-237317:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
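Each --publish=127.0.0.1::PORT above leaves the host-port field empty, so Docker binds an ephemeral port that later steps recover with Go-template inspects. The standard docker port subcommand shows the same mappings (the example value is the one this run ends up using):

    docker port newest-cni-237317 22/tcp     # e.g. 127.0.0.1:33835
    docker port newest-cni-237317 8443/tcp   # the published API server port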
	I1210 07:41:28.033415 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Running}}
	I1210 07:41:28.055793 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:41:28.079642 1064794 cli_runner.go:164] Run: docker exec newest-cni-237317 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:41:28.133214 1064794 oci.go:144] the created container "newest-cni-237317" has a running status.
	I1210 07:41:28.133248 1064794 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa...
	I1210 07:41:28.633820 1064794 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:41:28.653829 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:41:28.671371 1064794 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:41:28.671397 1064794 kic_runner.go:114] Args: [docker exec --privileged newest-cni-237317 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:41:28.713692 1064794 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:41:28.729850 1064794 machine.go:94] provisionDockerMachine start ...
	I1210 07:41:28.729960 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:28.748329 1064794 main.go:143] libmachine: Using SSH client type: native
	I1210 07:41:28.748679 1064794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33835 <nil> <nil>}
	I1210 07:41:28.748697 1064794 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:41:28.749343 1064794 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 07:41:31.886152 1064794 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-237317
	
	I1210 07:41:31.886178 1064794 ubuntu.go:182] provisioning hostname "newest-cni-237317"
	I1210 07:41:31.886283 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:31.903879 1064794 main.go:143] libmachine: Using SSH client type: native
	I1210 07:41:31.904204 1064794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33835 <nil> <nil>}
	I1210 07:41:31.904222 1064794 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-237317 && echo "newest-cni-237317" | sudo tee /etc/hostname
	I1210 07:41:32.048555 1064794 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-237317
	
	I1210 07:41:32.048637 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.067055 1064794 main.go:143] libmachine: Using SSH client type: native
	I1210 07:41:32.067377 1064794 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33835 <nil> <nil>}
	I1210 07:41:32.067401 1064794 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-237317' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-237317/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-237317' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:41:32.202608 1064794 main.go:143] libmachine: SSH cmd err, output: <nil>: 
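Provisioning drives the node over the ephemeral SSH port (33835 in this run) with the generated machine key. The same session can be reproduced by hand, assuming the key path from this log:

    KEY=/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa
    # StrictHostKeyChecking=no mirrors what a throwaway CI node needs
    ssh -i "$KEY" -p 33835 -o StrictHostKeyChecking=no docker@127.0.0.1 hostname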
	I1210 07:41:32.202637 1064794 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 07:41:32.202667 1064794 ubuntu.go:190] setting up certificates
	I1210 07:41:32.202678 1064794 provision.go:84] configureAuth start
	I1210 07:41:32.202744 1064794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:41:32.219337 1064794 provision.go:143] copyHostCerts
	I1210 07:41:32.219404 1064794 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 07:41:32.219420 1064794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 07:41:32.219497 1064794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 07:41:32.219602 1064794 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 07:41:32.219616 1064794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 07:41:32.219646 1064794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 07:41:32.219709 1064794 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 07:41:32.219718 1064794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 07:41:32.219745 1064794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 07:41:32.219807 1064794 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.newest-cni-237317 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-237317]
	I1210 07:41:32.533791 1064794 provision.go:177] copyRemoteCerts
	I1210 07:41:32.533865 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:41:32.533934 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.551601 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:32.650073 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:41:32.667141 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:41:32.684669 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:41:32.702081 1064794 provision.go:87] duration metric: took 499.382435ms to configureAuth
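configureAuth mints a server certificate signed by the local minikube CA (SANs: 127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-237317) and ships ca.pem/server.pem/server-key.pem into /etc/docker on the node. A quick host-side sanity check with openssl (paths from this run; -ext needs OpenSSL 1.1.1+):

    MK=/home/jenkins/minikube-integration/22089-784887/.minikube
    # Does the server cert chain back to the minikube CA, with the expected SANs?
    openssl verify -CAfile "$MK/certs/ca.pem" "$MK/machines/server.pem"
    openssl x509 -in "$MK/machines/server.pem" -noout -ext subjectAltName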
	I1210 07:41:32.702111 1064794 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:41:32.702312 1064794 config.go:182] Loaded profile config "newest-cni-237317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:41:32.702326 1064794 machine.go:97] duration metric: took 3.972452975s to provisionDockerMachine
	I1210 07:41:32.702334 1064794 client.go:176] duration metric: took 9.950533371s to LocalClient.Create
	I1210 07:41:32.702347 1064794 start.go:167] duration metric: took 9.950596741s to libmachine.API.Create "newest-cni-237317"
	I1210 07:41:32.702357 1064794 start.go:293] postStartSetup for "newest-cni-237317" (driver="docker")
	I1210 07:41:32.702367 1064794 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:41:32.702426 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:41:32.702514 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.718852 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:32.814355 1064794 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:41:32.817769 1064794 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:41:32.817798 1064794 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:41:32.817811 1064794 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 07:41:32.817871 1064794 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 07:41:32.817953 1064794 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 07:41:32.818081 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:41:32.825310 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:41:32.842517 1064794 start.go:296] duration metric: took 140.145403ms for postStartSetup
	I1210 07:41:32.842887 1064794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:41:32.859215 1064794 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json ...
	I1210 07:41:32.859502 1064794 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:41:32.859553 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.875883 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:32.967611 1064794 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:41:32.972359 1064794 start.go:128] duration metric: took 10.224272788s to createHost
	I1210 07:41:32.972384 1064794 start.go:83] releasing machines lock for "newest-cni-237317", held for 10.224421419s
	I1210 07:41:32.972457 1064794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:41:32.990273 1064794 ssh_runner.go:195] Run: cat /version.json
	I1210 07:41:32.990351 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:32.990655 1064794 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:41:32.990729 1064794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:41:33.013202 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:33.031539 1064794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33835 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:41:33.114754 1064794 ssh_runner.go:195] Run: systemctl --version
	I1210 07:41:33.211079 1064794 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:41:33.215428 1064794 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:41:33.215545 1064794 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:41:33.242581 1064794 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
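Competing bridge/podman CNI configs are parked by renaming them with a .mk_disabled suffix rather than deleting them, so the change stays reversible. Undoing it is the inverse rename (a sketch, to be run inside the node):

    # Re-enable any CNI configs that were sidelined with .mk_disabled
    sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;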
	I1210 07:41:33.242622 1064794 start.go:496] detecting cgroup driver to use...
	I1210 07:41:33.242657 1064794 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:41:33.242740 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:41:33.257818 1064794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:41:33.270562 1064794 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:41:33.270659 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:41:33.288766 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:41:33.307284 1064794 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:41:33.417555 1064794 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:41:33.559224 1064794 docker.go:234] disabling docker service ...
	I1210 07:41:33.559382 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:41:33.583026 1064794 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:41:33.596320 1064794 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:41:33.714101 1064794 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:41:33.838575 1064794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:41:33.853369 1064794 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:41:33.868162 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:41:33.876869 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:41:33.885636 1064794 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:41:33.885711 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:41:33.894404 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:41:33.903504 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:41:33.912288 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:41:33.920951 1064794 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:41:33.929214 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:41:33.938205 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:41:33.947047 1064794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:41:33.955864 1064794 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:41:33.963242 1064794 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:41:33.970548 1064794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:41:34.113023 1064794 ssh_runner.go:195] Run: sudo systemctl restart containerd
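The sed pipeline above pins the sandbox image to registry.k8s.io/pause:3.10.1, forces SystemdCgroup = false to match the host's cgroupfs driver, and normalizes the runc runtime to io.containerd.runc.v2. After the restart, the merged result can be checked from inside the node (output layout varies by containerd/crictl version):

    # Confirm containerd picked up the cgroup and pause-image settings
    sudo containerd config dump | grep -E 'SystemdCgroup|sandbox_image'
    sudo crictl info | grep -i cgroup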
	I1210 07:41:34.252751 1064794 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:41:34.252855 1064794 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:41:34.256875 1064794 start.go:564] Will wait 60s for crictl version
	I1210 07:41:34.256993 1064794 ssh_runner.go:195] Run: which crictl
	I1210 07:41:34.260563 1064794 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:41:34.285437 1064794 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:41:34.285530 1064794 ssh_runner.go:195] Run: containerd --version
	I1210 07:41:34.307510 1064794 ssh_runner.go:195] Run: containerd --version
	I1210 07:41:34.335239 1064794 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 07:41:34.338330 1064794 cli_runner.go:164] Run: docker network inspect newest-cni-237317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:41:34.356185 1064794 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 07:41:34.360231 1064794 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
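host.minikube.internal is pinned to the network gateway (192.168.76.1) so workloads in the node can reach the host; the bash one-liner above rewrites /etc/hosts atomically through a temp file. Verifying from outside the node, using the container name from this run:

    docker exec newest-cni-237317 grep host.minikube.internal /etc/hosts
    # expected: 192.168.76.1   host.minikube.internal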
	I1210 07:41:34.373151 1064794 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 07:41:34.376063 1064794 kubeadm.go:884] updating cluster {Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:41:34.376220 1064794 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:41:34.376306 1064794 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:41:34.404402 1064794 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:41:34.404424 1064794 containerd.go:534] Images already preloaded, skipping extraction
	I1210 07:41:34.404484 1064794 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:41:34.432485 1064794 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:41:34.432510 1064794 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:41:34.432518 1064794 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1210 07:41:34.432610 1064794 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-237317 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:41:34.432688 1064794 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:41:34.457473 1064794 cni.go:84] Creating CNI manager for ""
	I1210 07:41:34.457499 1064794 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:41:34.457517 1064794 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 07:41:34.457543 1064794 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-237317 NodeName:newest-cni-237317 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:41:34.457665 1064794 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-237317"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
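The generated config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. It can also be rehearsed offline before the real init; a sketch using the binary path from this run (kubeadm config validate exists in recent kubeadm releases):

    B=/var/lib/minikube/binaries/v1.35.0-beta.0
    sudo "$B/kubeadm" config validate --config /var/tmp/minikube/kubeadm.yaml
    # or a full no-side-effects rehearsal:
    sudo "$B/kubeadm" init --config /var/tmp/minikube/kubeadm.yaml --dry-run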
	
	I1210 07:41:34.457735 1064794 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:41:34.465701 1064794 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:41:34.465807 1064794 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:41:34.473755 1064794 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 07:41:34.486983 1064794 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:41:34.499868 1064794 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1210 07:41:34.513272 1064794 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:41:34.517130 1064794 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:41:34.527569 1064794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:41:34.663379 1064794 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:41:34.680375 1064794 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317 for IP: 192.168.76.2
	I1210 07:41:34.680450 1064794 certs.go:195] generating shared ca certs ...
	I1210 07:41:34.680483 1064794 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.680674 1064794 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 07:41:34.680764 1064794 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 07:41:34.680789 1064794 certs.go:257] generating profile certs ...
	I1210 07:41:34.680884 1064794 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.key
	I1210 07:41:34.680928 1064794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.crt with IP's: []
	I1210 07:41:34.839451 1064794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.crt ...
	I1210 07:41:34.839486 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.crt: {Name:mk864b17e4815ee03fc5eadc45f8f3d330d86e28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.839718 1064794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.key ...
	I1210 07:41:34.839736 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.key: {Name:mkac75ec3f8c520b4be98288202003aea88a7881 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.840557 1064794 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f
	I1210 07:41:34.840584 1064794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1210 07:41:34.941668 1064794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f ...
	I1210 07:41:34.941702 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f: {Name:mkd71b8623c8311dc88c663a4045d0b1945deec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.941880 1064794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f ...
	I1210 07:41:34.941896 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f: {Name:mk237d8326178abb6dfc7e4dd919116ec45ea9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:34.941986 1064794 certs.go:382] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt.8d76a14f -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt
	I1210 07:41:34.942080 1064794 certs.go:386] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key
	I1210 07:41:34.942146 1064794 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key
	I1210 07:41:34.942168 1064794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt with IP's: []
	I1210 07:41:35.425873 1064794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt ...
	I1210 07:41:35.425915 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt: {Name:mk51f419728d59ba7ab729d028e45d36640d0231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:41:35.426770 1064794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key ...
	I1210 07:41:35.426789 1064794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key: {Name:mkb1e5352ebb3c5d51e6e8aed5c36263957e6d22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
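Three profile cert pairs are minted here: the kubectl client pair, the apiserver serving pair (IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2), and the front-proxy ("aggregator") client pair. The serving SANs can be confirmed with openssl against the profile path from this run:

    P=/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317
    openssl x509 -in "$P/apiserver.crt" -noout -subject -ext subjectAltName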
	I1210 07:41:35.426995 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 07:41:35.427044 1064794 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 07:41:35.427053 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:41:35.427086 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:41:35.427120 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:41:35.427152 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 07:41:35.427211 1064794 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:41:35.427806 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:41:35.447027 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:41:35.465831 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:41:35.484250 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:41:35.503105 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:41:35.521550 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:41:35.540269 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:41:35.563542 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 07:41:35.585738 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 07:41:35.609714 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 07:41:35.628843 1064794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:41:35.647443 1064794 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:41:35.660385 1064794 ssh_runner.go:195] Run: openssl version
	I1210 07:41:35.666967 1064794 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.674504 1064794 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 07:41:35.682195 1064794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.685889 1064794 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.686015 1064794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 07:41:35.727961 1064794 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:41:35.735327 1064794 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/786751.pem /etc/ssl/certs/51391683.0
	I1210 07:41:35.742756 1064794 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.750368 1064794 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 07:41:35.757661 1064794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.761188 1064794 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.761251 1064794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 07:41:35.802868 1064794 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:41:35.810435 1064794 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7867512.pem /etc/ssl/certs/3ec20f2e.0
	I1210 07:41:35.817738 1064794 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.825308 1064794 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:41:35.832635 1064794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.836310 1064794 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.836372 1064794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:41:35.877289 1064794 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:41:35.884982 1064794 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
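The symlink names 51391683.0, 3ec20f2e.0 and b5213941.0 are OpenSSL subject hashes: openssl locates a CA in a cert directory by hashing its subject and appending ".0". Each hash/link pair above follows the same recipe, e.g. for the minikube CA:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"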
	I1210 07:41:35.892614 1064794 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:41:35.896331 1064794 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:41:35.896383 1064794 kubeadm.go:401] StartCluster: {Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:41:35.896476 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:41:35.896540 1064794 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:41:35.926036 1064794 cri.go:89] found id: ""
	I1210 07:41:35.926112 1064794 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:41:35.934414 1064794 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:41:35.942276 1064794 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:41:35.942375 1064794 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:41:35.950211 1064794 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:41:35.950233 1064794 kubeadm.go:158] found existing configuration files:
	
	I1210 07:41:35.950309 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:41:35.957845 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:41:35.957962 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:41:35.966039 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:41:35.973562 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:41:35.973662 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:41:35.980914 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:41:35.988697 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:41:35.988772 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:41:35.996172 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:41:36.005049 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:41:36.005181 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:41:36.014173 1064794 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:41:36.062406 1064794 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:41:36.062713 1064794 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:41:36.147250 1064794 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:41:36.147383 1064794 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:41:36.147459 1064794 kubeadm.go:319] OS: Linux
	I1210 07:41:36.147537 1064794 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:41:36.147617 1064794 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:41:36.147688 1064794 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:41:36.147759 1064794 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:41:36.147834 1064794 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:41:36.147902 1064794 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:41:36.147977 1064794 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:41:36.148046 1064794 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:41:36.148124 1064794 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:41:36.223682 1064794 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:41:36.223913 1064794 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:41:36.224078 1064794 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:41:36.230559 1064794 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:41:36.237069 1064794 out.go:252]   - Generating certificates and keys ...
	I1210 07:41:36.237182 1064794 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:41:36.237257 1064794 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:41:36.476610 1064794 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:41:36.561778 1064794 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:41:36.854281 1064794 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:41:37.263690 1064794 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:41:37.370103 1064794 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:41:37.370484 1064794 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 07:41:37.933573 1064794 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:41:37.934013 1064794 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 07:41:38.192710 1064794 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:41:38.352643 1064794 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:41:38.587081 1064794 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:41:38.587306 1064794 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:41:38.909718 1064794 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:41:39.048089 1064794 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:41:39.097056 1064794 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:41:39.169471 1064794 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:41:39.365961 1064794 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:41:39.366635 1064794 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:41:39.369209 1064794 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:41:39.372794 1064794 out.go:252]   - Booting up control plane ...
	I1210 07:41:39.372894 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:41:39.372969 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:41:39.374027 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:41:39.390260 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:41:39.390694 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:41:39.397555 1064794 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:41:39.397883 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:41:39.397929 1064794 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:41:39.536450 1064794 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:41:39.536565 1064794 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:45:23.105552 1061272 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000403225s
	I1210 07:45:23.105596 1061272 kubeadm.go:319] 
	I1210 07:45:23.105659 1061272 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:45:23.105695 1061272 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:45:23.105810 1061272 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:45:23.105817 1061272 kubeadm.go:319] 
	I1210 07:45:23.105931 1061272 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:45:23.105968 1061272 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:45:23.106003 1061272 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:45:23.106008 1061272 kubeadm.go:319] 
	I1210 07:45:23.110089 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:23.110529 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:23.110638 1061272 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:45:23.110873 1061272 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:45:23.110878 1061272 kubeadm.go:319] 
	I1210 07:45:23.110946 1061272 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
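At this point kubeadm has polled the kubelet's local health endpoint for four minutes without an answer. The same probe can be run by hand from inside the node to confirm whether the kubelet is listening at all; a minimal sketch, assuming the default healthz port 10248 and the no-preload-587009 profile seen in this log:

    # Probe the endpoint kubeadm polls; "connection refused" means the
    # kubelet process is not running at all, matching the failure above.
    minikube -p no-preload-587009 ssh -- curl -sSL http://127.0.0.1:10248/healthz
    # A healthy kubelet answers "ok".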
	W1210 07:45:23.111048 1061272 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-587009] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-587009] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000403225s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
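The second SystemVerification warning in the stderr above names the opt-out for cgroup v1 hosts: the kubelet configuration option FailCgroupV1 must be set to false. In kubeadm terms that means appending a KubeletConfiguration document to the init config; a sketch, assuming the v1beta1 field is spelled failCgroupV1 and reusing the /var/tmp/minikube/kubeadm.yaml path from this log:

    # Hypothetical opt-out for a cgroup v1 node; verify the field name
    # against the kubelet version actually in use before relying on it.
    cat <<'EOF' | sudo tee -a /var/tmp/minikube/kubeadm.yaml
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF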
	
	I1210 07:45:23.111129 1061272 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 07:45:23.528980 1061272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:45:23.543064 1061272 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:45:23.543133 1061272 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:45:23.552680 1061272 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:45:23.552702 1061272 kubeadm.go:158] found existing configuration files:
	
	I1210 07:45:23.552757 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:45:23.561132 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:45:23.561196 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:45:23.569220 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:45:23.577552 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:45:23.577617 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:45:23.585736 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:45:23.594195 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:45:23.594261 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:45:23.602367 1061272 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:45:23.610802 1061272 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:45:23.610868 1061272 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
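The grep-and-remove sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is deleted before the retry. Reconstructed as a shell loop (a sketch of the logic shown in the log, not minikube's actual source):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' \
        "/etc/kubernetes/${f}.conf" 2>/dev/null \
        || sudo rm -f "/etc/kubernetes/${f}.conf"   # stale or missing: remove
    done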
	I1210 07:45:23.618934 1061272 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:45:23.738244 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:23.738666 1061272 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:23.820302 1061272 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:45:39.537616 1064794 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001129575s
	I1210 07:45:39.537650 1064794 kubeadm.go:319] 
	I1210 07:45:39.537709 1064794 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:45:39.537747 1064794 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:45:39.537857 1064794 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:45:39.537866 1064794 kubeadm.go:319] 
	I1210 07:45:39.537971 1064794 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:45:39.538008 1064794 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:45:39.538043 1064794 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:45:39.538052 1064794 kubeadm.go:319] 
	I1210 07:45:39.542860 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:39.543622 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:39.543819 1064794 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:45:39.544243 1064794 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:45:39.544260 1064794 kubeadm.go:319] 
	I1210 07:45:39.544379 1064794 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 07:45:39.544512 1064794 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-237317] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001129575s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 07:45:39.544665 1064794 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 07:45:40.003717 1064794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:45:40.026427 1064794 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:45:40.026565 1064794 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:45:40.036588 1064794 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:45:40.036615 1064794 kubeadm.go:158] found existing configuration files:
	
	I1210 07:45:40.036678 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:45:40.045938 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:45:40.046015 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:45:40.054590 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:45:40.063126 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:45:40.063204 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:45:40.071408 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:45:40.079679 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:45:40.079771 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:45:40.088102 1064794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:45:40.097134 1064794 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:45:40.097216 1064794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:45:40.105436 1064794 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:45:40.222290 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:45:40.222807 1064794 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:45:40.298915 1064794 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:49:26.015309 1061272 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 07:49:26.015352 1061272 kubeadm.go:319] 
	I1210 07:49:26.015478 1061272 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 07:49:26.021506 1061272 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:49:26.021573 1061272 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:49:26.021669 1061272 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:49:26.021735 1061272 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:49:26.021780 1061272 kubeadm.go:319] OS: Linux
	I1210 07:49:26.021833 1061272 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:49:26.021898 1061272 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:49:26.021954 1061272 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:49:26.022012 1061272 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:49:26.022072 1061272 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:49:26.022130 1061272 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:49:26.022183 1061272 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:49:26.022239 1061272 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:49:26.022294 1061272 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:49:26.022377 1061272 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:49:26.022520 1061272 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:49:26.022665 1061272 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:49:26.022797 1061272 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:49:26.025625 1061272 out.go:252]   - Generating certificates and keys ...
	I1210 07:49:26.025738 1061272 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:49:26.025820 1061272 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:49:26.025909 1061272 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:49:26.025981 1061272 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:49:26.026084 1061272 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:49:26.026145 1061272 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:49:26.026218 1061272 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:49:26.026288 1061272 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:49:26.026372 1061272 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:49:26.026456 1061272 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:49:26.026527 1061272 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:49:26.026596 1061272 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:49:26.026658 1061272 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:49:26.026731 1061272 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:49:26.026814 1061272 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:49:26.026910 1061272 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:49:26.027000 1061272 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:49:26.027123 1061272 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:49:26.027217 1061272 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:49:26.032204 1061272 out.go:252]   - Booting up control plane ...
	I1210 07:49:26.032327 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:49:26.032449 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:49:26.032535 1061272 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:49:26.032660 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:49:26.032760 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:49:26.032871 1061272 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:49:26.032963 1061272 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:49:26.033008 1061272 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:49:26.033144 1061272 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:49:26.033252 1061272 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:49:26.033319 1061272 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00018658s
	I1210 07:49:26.033356 1061272 kubeadm.go:319] 
	I1210 07:49:26.033430 1061272 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:49:26.033471 1061272 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:49:26.033578 1061272 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:49:26.033591 1061272 kubeadm.go:319] 
	I1210 07:49:26.033695 1061272 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:49:26.033732 1061272 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:49:26.033765 1061272 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:49:26.033838 1061272 kubeadm.go:403] duration metric: took 8m9.047256448s to StartCluster
	I1210 07:49:26.033878 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:49:26.033967 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:49:26.034180 1061272 kubeadm.go:319] 
	I1210 07:49:26.078012 1061272 cri.go:89] found id: ""
	I1210 07:49:26.078053 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.078063 1061272 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:49:26.078088 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:49:26.078174 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:49:26.106609 1061272 cri.go:89] found id: ""
	I1210 07:49:26.106637 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.106653 1061272 logs.go:284] No container was found matching "etcd"
	I1210 07:49:26.106660 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:49:26.106763 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:49:26.132553 1061272 cri.go:89] found id: ""
	I1210 07:49:26.132579 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.132589 1061272 logs.go:284] No container was found matching "coredns"
	I1210 07:49:26.132595 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:49:26.132657 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:49:26.159729 1061272 cri.go:89] found id: ""
	I1210 07:49:26.159779 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.159789 1061272 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:49:26.159797 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:49:26.159864 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:49:26.185308 1061272 cri.go:89] found id: ""
	I1210 07:49:26.185386 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.185409 1061272 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:49:26.185430 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:49:26.185524 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:49:26.210297 1061272 cri.go:89] found id: ""
	I1210 07:49:26.210364 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.210388 1061272 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:49:26.210409 1061272 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:49:26.210538 1061272 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:49:26.235247 1061272 cri.go:89] found id: ""
	I1210 07:49:26.235320 1061272 logs.go:282] 0 containers: []
	W1210 07:49:26.235341 1061272 logs.go:284] No container was found matching "kindnet"
	I1210 07:49:26.235352 1061272 logs.go:123] Gathering logs for kubelet ...
	I1210 07:49:26.235364 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:49:26.292545 1061272 logs.go:123] Gathering logs for dmesg ...
	I1210 07:49:26.292580 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:49:26.309666 1061272 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:49:26.309695 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:49:26.371886 1061272 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:49:26.363797    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.364463    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.366186    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.366720    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.368404    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:49:26.363797    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.364463    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.366186    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.366720    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:26.368404    5411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:49:26.371909 1061272 logs.go:123] Gathering logs for containerd ...
	I1210 07:49:26.371922 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:49:26.414122 1061272 logs.go:123] Gathering logs for container status ...
	I1210 07:49:26.414158 1061272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
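The log-gathering pass above can be reproduced by hand on the node; a sketch mirroring the same probes, assuming crictl is on the PATH:

    sudo crictl ps -a --quiet --name=kube-apiserver  # empty here: the apiserver never started
    sudo journalctl -u kubelet -n 400                # the kubelet's own failure reason
    sudo journalctl -u containerd -n 400             # runtime-side errors
    sudo crictl ps -a || sudo docker ps -a           # fall back to docker if crictl is absent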
	W1210 07:49:26.443108 1061272 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00018658s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 07:49:26.443165 1061272 out.go:285] * 
	W1210 07:49:26.443224 1061272 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00018658s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:49:26.443242 1061272 out.go:285] * 
	W1210 07:49:26.445452 1061272 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:49:26.452172 1061272 out.go:203] 
	W1210 07:49:26.455094 1061272 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00018658s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:49:26.455136 1061272 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:49:26.455159 1061272 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:49:26.458257 1061272 out.go:203] 
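The suggestion above is minikube's own remediation: pin the kubelet cgroup driver to systemd. A minimal sketch of the retry, assuming this job's docker/containerd setup; the profile name is a placeholder, not a value taken from this log:

	# Hedged sketch: retry the start with the workaround suggested above.
	# <profile> is a placeholder; driver and runtime flags mirror this job.
	minikube start -p <profile> --driver=docker --container-runtime=containerd \
	    --extra-config=kubelet.cgroup-driver=systemd
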
	I1210 07:49:42.193287 1064794 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 07:49:42.193319 1064794 kubeadm.go:319] 
	I1210 07:49:42.193391 1064794 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 07:49:42.203786 1064794 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:49:42.203866 1064794 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:49:42.203970 1064794 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:49:42.204031 1064794 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:49:42.204076 1064794 kubeadm.go:319] OS: Linux
	I1210 07:49:42.204124 1064794 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:49:42.204177 1064794 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:49:42.204229 1064794 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:49:42.204282 1064794 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:49:42.204335 1064794 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:49:42.204389 1064794 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:49:42.204441 1064794 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:49:42.204493 1064794 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:49:42.204543 1064794 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:49:42.204619 1064794 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:49:42.204719 1064794 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:49:42.204814 1064794 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:49:42.204881 1064794 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:49:42.208050 1064794 out.go:252]   - Generating certificates and keys ...
	I1210 07:49:42.208163 1064794 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:49:42.208281 1064794 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:49:42.208377 1064794 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:49:42.208439 1064794 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:49:42.208528 1064794 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:49:42.208589 1064794 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:49:42.208676 1064794 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:49:42.208750 1064794 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:49:42.208862 1064794 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:49:42.208970 1064794 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:49:42.209024 1064794 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:49:42.209111 1064794 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:49:42.209168 1064794 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:49:42.209240 1064794 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:49:42.209310 1064794 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:49:42.209381 1064794 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:49:42.209443 1064794 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:49:42.209538 1064794 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:49:42.209611 1064794 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:49:42.212530 1064794 out.go:252]   - Booting up control plane ...
	I1210 07:49:42.212677 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:49:42.212801 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:49:42.212895 1064794 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:49:42.213029 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:49:42.213133 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:49:42.213240 1064794 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:49:42.213324 1064794 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:49:42.213364 1064794 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:49:42.213496 1064794 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:49:42.213644 1064794 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:49:42.213727 1064794 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000061258s
	I1210 07:49:42.213738 1064794 kubeadm.go:319] 
	I1210 07:49:42.213804 1064794 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:49:42.213856 1064794 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:49:42.213977 1064794 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:49:42.213986 1064794 kubeadm.go:319] 
	I1210 07:49:42.214091 1064794 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:49:42.214141 1064794 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:49:42.214197 1064794 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:49:42.214296 1064794 kubeadm.go:403] duration metric: took 8m6.317915618s to StartCluster
	I1210 07:49:42.214312 1064794 kubeadm.go:319] 
	I1210 07:49:42.214353 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:49:42.214424 1064794 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:49:42.249563 1064794 cri.go:89] found id: ""
	I1210 07:49:42.249601 1064794 logs.go:282] 0 containers: []
	W1210 07:49:42.249610 1064794 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:49:42.249616 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:49:42.249684 1064794 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:49:42.277505 1064794 cri.go:89] found id: ""
	I1210 07:49:42.277532 1064794 logs.go:282] 0 containers: []
	W1210 07:49:42.277542 1064794 logs.go:284] No container was found matching "etcd"
	I1210 07:49:42.277549 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:49:42.277621 1064794 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:49:42.315271 1064794 cri.go:89] found id: ""
	I1210 07:49:42.315293 1064794 logs.go:282] 0 containers: []
	W1210 07:49:42.315301 1064794 logs.go:284] No container was found matching "coredns"
	I1210 07:49:42.315308 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:49:42.315372 1064794 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:49:42.345026 1064794 cri.go:89] found id: ""
	I1210 07:49:42.345048 1064794 logs.go:282] 0 containers: []
	W1210 07:49:42.345059 1064794 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:49:42.345066 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:49:42.345129 1064794 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:49:42.373644 1064794 cri.go:89] found id: ""
	I1210 07:49:42.373666 1064794 logs.go:282] 0 containers: []
	W1210 07:49:42.373675 1064794 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:49:42.373683 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:49:42.373745 1064794 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:49:42.400575 1064794 cri.go:89] found id: ""
	I1210 07:49:42.400601 1064794 logs.go:282] 0 containers: []
	W1210 07:49:42.400611 1064794 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:49:42.400617 1064794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:49:42.400696 1064794 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:49:42.427038 1064794 cri.go:89] found id: ""
	I1210 07:49:42.427115 1064794 logs.go:282] 0 containers: []
	W1210 07:49:42.427139 1064794 logs.go:284] No container was found matching "kindnet"
	I1210 07:49:42.427159 1064794 logs.go:123] Gathering logs for containerd ...
	I1210 07:49:42.427171 1064794 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:49:42.467853 1064794 logs.go:123] Gathering logs for container status ...
	I1210 07:49:42.467891 1064794 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:49:42.498107 1064794 logs.go:123] Gathering logs for kubelet ...
	I1210 07:49:42.498136 1064794 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:49:42.557296 1064794 logs.go:123] Gathering logs for dmesg ...
	I1210 07:49:42.557339 1064794 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:49:42.581299 1064794 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:49:42.581326 1064794 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:49:42.659525 1064794 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:49:42.650006    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:42.651006    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:42.651835    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:42.653583    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:42.654334    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:49:42.650006    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:42.651006    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:42.651835    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:42.653583    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:49:42.654334    4786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1210 07:49:42.659549 1064794 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1210 07:49:42.659589 1064794 out.go:285] * 
	W1210 07:49:42.659647 1064794 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1210 07:49:42.659666 1064794 out.go:285] * 
	W1210 07:49:42.661902 1064794 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:49:42.667670 1064794 out.go:203] 
	W1210 07:49:42.671449 1064794 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1210 07:49:42.671502 1064794 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:49:42.671524 1064794 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:49:42.674621 1064794 out.go:203] 
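Both attempts fail identically: the v1.35.0-beta.0 kubelet never becomes healthy on this host, and the kubelet log below pins it on cgroup v1. A quick way to confirm which cgroup hierarchy the node mounts, assuming the no-preload-587009 profile from this post-mortem:

	# Prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on cgroup v1.
	minikube ssh -p no-preload-587009 -- stat -fc %T /sys/fs/cgroup/
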
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 07:41:06 no-preload-587009 containerd[758]: time="2025-12-10T07:41:06.789083088Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:08 no-preload-587009 containerd[758]: time="2025-12-10T07:41:08.284850821Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 10 07:41:08 no-preload-587009 containerd[758]: time="2025-12-10T07:41:08.287114833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 10 07:41:08 no-preload-587009 containerd[758]: time="2025-12-10T07:41:08.296151055Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:08 no-preload-587009 containerd[758]: time="2025-12-10T07:41:08.297726981Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:09 no-preload-587009 containerd[758]: time="2025-12-10T07:41:09.295008191Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 10 07:41:09 no-preload-587009 containerd[758]: time="2025-12-10T07:41:09.297291871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 10 07:41:09 no-preload-587009 containerd[758]: time="2025-12-10T07:41:09.305236801Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:09 no-preload-587009 containerd[758]: time="2025-12-10T07:41:09.313440846Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:10 no-preload-587009 containerd[758]: time="2025-12-10T07:41:10.490269450Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 10 07:41:10 no-preload-587009 containerd[758]: time="2025-12-10T07:41:10.493135235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 10 07:41:10 no-preload-587009 containerd[758]: time="2025-12-10T07:41:10.503850918Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:10 no-preload-587009 containerd[758]: time="2025-12-10T07:41:10.504417343Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:11 no-preload-587009 containerd[758]: time="2025-12-10T07:41:11.559054031Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 10 07:41:11 no-preload-587009 containerd[758]: time="2025-12-10T07:41:11.561269122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 10 07:41:11 no-preload-587009 containerd[758]: time="2025-12-10T07:41:11.569663283Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:11 no-preload-587009 containerd[758]: time="2025-12-10T07:41:11.570266705Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:12 no-preload-587009 containerd[758]: time="2025-12-10T07:41:12.618033993Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 10 07:41:12 no-preload-587009 containerd[758]: time="2025-12-10T07:41:12.620356878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 10 07:41:12 no-preload-587009 containerd[758]: time="2025-12-10T07:41:12.629513282Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:12 no-preload-587009 containerd[758]: time="2025-12-10T07:41:12.630204657Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:13 no-preload-587009 containerd[758]: time="2025-12-10T07:41:13.276669096Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 10 07:41:13 no-preload-587009 containerd[758]: time="2025-12-10T07:41:13.278998807Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 10 07:41:13 no-preload-587009 containerd[758]: time="2025-12-10T07:41:13.285987103Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 07:41:13 no-preload-587009 containerd[758]: time="2025-12-10T07:41:13.286306090Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:08.437111    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:51:08.437749    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:51:08.439421    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:51:08.440062    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:51:08.441672    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:51:08 up  6:33,  0 user,  load average: 0.21, 0.81, 1.56
	Linux no-preload-587009 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:51:05 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:51:05 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 453.
	Dec 10 07:51:05 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:51:05 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:51:05 no-preload-587009 kubelet[6632]: E1210 07:51:05.845120    6632 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:51:05 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:51:05 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:51:06 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 454.
	Dec 10 07:51:06 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:51:06 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:51:06 no-preload-587009 kubelet[6637]: E1210 07:51:06.593314    6637 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:51:06 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:51:06 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:51:07 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 455.
	Dec 10 07:51:07 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:51:07 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:51:07 no-preload-587009 kubelet[6648]: E1210 07:51:07.345150    6648 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:51:07 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:51:07 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:51:08 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 456.
	Dec 10 07:51:08 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:51:08 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:51:08 no-preload-587009 kubelet[6671]: E1210 07:51:08.104584    6671 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:51:08 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:51:08 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
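The kubelet log above shows the terminal failure: kubelet v1.35 validates its configuration and exits on cgroup v1 hosts unless failCgroupV1 is false, exactly as the SystemVerification warning in the kubeadm output said. A sketch of that override, assuming the config path kubeadm wrote in this run; appending the field this way is illustrative, not minikube's actual wiring:

	# Hedged sketch: allow kubelet v1.35 to run on a cgroup v1 host, per the
	# WARNING in the kubeadm output. Run inside the node; path from this run.
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet
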
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-587009 -n no-preload-587009
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-587009 -n no-preload-587009: exit status 6 (329.738422ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 07:51:08.936884 1077047 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-587009" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-587009" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (97.95s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (88.65s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-237317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1210 07:49:44.252825  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/default-k8s-diff-port-444518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:50:14.423871  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:50:24.251003  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:50:38.858671  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-237317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m27.138317221s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-237317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
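The addon enable fails for the same underlying reason as the start: nothing is serving on localhost:8443, so every kubectl apply validation request is refused. A pre-check sketch, assuming the profile's kubeconfig context exists on the host:

	# Hedged sketch: confirm the apiserver answers before enabling addons.
	minikube -p newest-cni-237317 status
	kubectl --context newest-cni-237317 get --raw=/healthz
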
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-237317
helpers_test.go:244: (dbg) docker inspect newest-cni-237317:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d",
	        "Created": "2025-12-10T07:41:27.764165056Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1065238,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:41:27.828515523Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d/hostname",
	        "HostsPath": "/var/lib/docker/containers/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d/hosts",
	        "LogPath": "/var/lib/docker/containers/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d-json.log",
	        "Name": "/newest-cni-237317",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-237317:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-237317",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d",
	                "LowerDir": "/var/lib/docker/overlay2/eb95a77a82f1fcb16098f073f23236757bf5560cf9fb37f652c127fb3ef2dbb4-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eb95a77a82f1fcb16098f073f23236757bf5560cf9fb37f652c127fb3ef2dbb4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eb95a77a82f1fcb16098f073f23236757bf5560cf9fb37f652c127fb3ef2dbb4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eb95a77a82f1fcb16098f073f23236757bf5560cf9fb37f652c127fb3ef2dbb4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-237317",
	                "Source": "/var/lib/docker/volumes/newest-cni-237317/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-237317",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-237317",
	                "name.minikube.sigs.k8s.io": "newest-cni-237317",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "082222785b25cb507d74041ac4c00d1d74bffe5ab668e3fe904c3260bea97985",
	            "SandboxKey": "/var/run/docker/netns/082222785b25",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33835"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33836"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33839"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33837"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33838"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-237317": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:54:e3:f6:e2:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8181aebce826300f2c9eb8f48208470a68f1816a212863fa9c220fbbaa29953b",
	                    "EndpointID": "bccbc4d36a210938307e473b6bf375481b7f47c4af07021cfaeeb28874de79dc",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-237317",
	                        "a3bfe8c2955a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-237317 -n newest-cni-237317
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-237317 -n newest-cni-237317: exit status 6 (384.993998ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 07:51:11.773161 1077831 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-237317" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-237317 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p default-k8s-diff-port-444518 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-444518 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ start   │ -p default-k8s-diff-port-444518 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:40 UTC │
	│ addons  │ enable metrics-server -p embed-certs-254586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ stop    │ -p embed-certs-254586 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-254586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ start   │ -p embed-certs-254586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:41 UTC │
	│ image   │ default-k8s-diff-port-444518 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ pause   │ -p default-k8s-diff-port-444518 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ unpause │ -p default-k8s-diff-port-444518 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p default-k8s-diff-port-444518                                                                                                                                                                                                                            │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p default-k8s-diff-port-444518                                                                                                                                                                                                                            │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p disable-driver-mounts-262664                                                                                                                                                                                                                            │ disable-driver-mounts-262664 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ start   │ -p no-preload-587009 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │                     │
	│ image   │ embed-certs-254586 image list --format=json                                                                                                                                                                                                                │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ pause   │ -p embed-certs-254586 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ unpause │ -p embed-certs-254586 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ delete  │ -p embed-certs-254586                                                                                                                                                                                                                                      │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ delete  │ -p embed-certs-254586                                                                                                                                                                                                                                      │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ start   │ -p newest-cni-237317 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-587009 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:49 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-237317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:49 UTC │                     │
	│ stop    │ -p no-preload-587009 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │ 10 Dec 25 07:51 UTC │
	│ addons  │ enable dashboard -p no-preload-587009 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │ 10 Dec 25 07:51 UTC │
	│ start   │ -p no-preload-587009 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:51:10
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:51:10.500767 1077343 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:51:10.500906 1077343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:51:10.500918 1077343 out.go:374] Setting ErrFile to fd 2...
	I1210 07:51:10.500925 1077343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:51:10.501188 1077343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 07:51:10.501563 1077343 out.go:368] Setting JSON to false
	I1210 07:51:10.502565 1077343 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23595,"bootTime":1765329476,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 07:51:10.502637 1077343 start.go:143] virtualization:  
	I1210 07:51:10.505863 1077343 out.go:179] * [no-preload-587009] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:51:10.509712 1077343 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:51:10.509860 1077343 notify.go:221] Checking for updates...
	I1210 07:51:10.515820 1077343 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:51:10.518876 1077343 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:10.521864 1077343 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 07:51:10.524801 1077343 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:51:10.527730 1077343 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:51:10.531152 1077343 config.go:182] Loaded profile config "no-preload-587009": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:10.531739 1077343 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:51:10.571297 1077343 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:51:10.571458 1077343 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:51:10.628105 1077343 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:51:10.617928902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:51:10.628225 1077343 docker.go:319] overlay module found
	I1210 07:51:10.631349 1077343 out.go:179] * Using the docker driver based on existing profile
	I1210 07:51:10.634203 1077343 start.go:309] selected driver: docker
	I1210 07:51:10.634227 1077343 start.go:927] validating driver "docker" against &{Name:no-preload-587009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:51:10.634329 1077343 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:51:10.635191 1077343 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:51:10.689538 1077343 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:51:10.680270661 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:51:10.689877 1077343 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:51:10.689910 1077343 cni.go:84] Creating CNI manager for ""
	I1210 07:51:10.689965 1077343 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:51:10.690007 1077343 start.go:353] cluster config:
	{Name:no-preload-587009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:51:10.693172 1077343 out.go:179] * Starting "no-preload-587009" primary control-plane node in "no-preload-587009" cluster
	I1210 07:51:10.696031 1077343 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:51:10.698925 1077343 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:51:10.701796 1077343 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:51:10.701893 1077343 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:51:10.701941 1077343 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/config.json ...
	I1210 07:51:10.702278 1077343 cache.go:107] acquiring lock: {Name:mkabea6e7b1e77c374f63c9a4d0766be00cc6317 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:51:10.702376 1077343 cache.go:115] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 07:51:10.702391 1077343 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 127.411µs
	I1210 07:51:10.702414 1077343 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 07:51:10.702433 1077343 cache.go:107] acquiring lock: {Name:mk64f56a3ea6b87518d3bc512eef54d76035bb9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:51:10.702503 1077343 cache.go:115] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1210 07:51:10.702516 1077343 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 84.062µs
	I1210 07:51:10.702523 1077343 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1210 07:51:10.702535 1077343 cache.go:107] acquiring lock: {Name:mkc3e57bbe80791d398050e8951aea73d362d920 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:51:10.702572 1077343 cache.go:115] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1210 07:51:10.702589 1077343 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 47.894µs
	I1210 07:51:10.702601 1077343 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1210 07:51:10.702613 1077343 cache.go:107] acquiring lock: {Name:mkb61a80f7472bdfd6bbc597d8ce9f0afe659105 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:51:10.702647 1077343 cache.go:115] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1210 07:51:10.702657 1077343 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 45.113µs
	I1210 07:51:10.702664 1077343 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1210 07:51:10.702674 1077343 cache.go:107] acquiring lock: {Name:mk88572bf90913c057455c882907a6c4416350fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:51:10.702700 1077343 cache.go:115] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1210 07:51:10.702709 1077343 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 37.44µs
	I1210 07:51:10.702716 1077343 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1210 07:51:10.702724 1077343 cache.go:107] acquiring lock: {Name:mk9279f9c659c863cac5b3805141cb5f659d3427 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:51:10.702756 1077343 cache.go:115] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 07:51:10.702765 1077343 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 42.396µs
	I1210 07:51:10.702778 1077343 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 07:51:10.702792 1077343 cache.go:107] acquiring lock: {Name:mk89d503b38bf82fa0b7406e77e02d931662720f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:51:10.702820 1077343 cache.go:115] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 07:51:10.702830 1077343 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 39.353µs
	I1210 07:51:10.702836 1077343 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 07:51:10.702845 1077343 cache.go:107] acquiring lock: {Name:mkde71767452c33eccd8ae2cb3e7952dfc30e95a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:51:10.702876 1077343 cache.go:115] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 07:51:10.702885 1077343 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 40.813µs
	I1210 07:51:10.702891 1077343 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 07:51:10.702897 1077343 cache.go:87] Successfully saved all images to host disk.
	I1210 07:51:10.721758 1077343 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:51:10.721782 1077343 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:51:10.721797 1077343 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:51:10.721830 1077343 start.go:360] acquireMachinesLock for no-preload-587009: {Name:mk024fb9ab341e7f6dd2192e8a4fa44e5bf27c0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:51:10.721889 1077343 start.go:364] duration metric: took 38.934µs to acquireMachinesLock for "no-preload-587009"
	I1210 07:51:10.721913 1077343 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:51:10.721919 1077343 fix.go:54] fixHost starting: 
	I1210 07:51:10.722189 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:10.738987 1077343 fix.go:112] recreateIfNeeded on no-preload-587009: state=Stopped err=<nil>
	W1210 07:51:10.739020 1077343 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.189451530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.189523465Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.189620993Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.189699829Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.189762329Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.189821054Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.189877891Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.189950344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.190019227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.190109386Z" level=info msg="Connect containerd service"
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.190549139Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.191210130Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.204351589Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.204433822Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.205109853Z" level=info msg="Start subscribing containerd event"
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.205184234Z" level=info msg="Start recovering state"
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.248948445Z" level=info msg="Start event monitor"
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.249142024Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.249206008Z" level=info msg="Start streaming server"
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.249282998Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.249338737Z" level=info msg="runtime interface starting up..."
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.249390947Z" level=info msg="starting plugins..."
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.249453495Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 07:41:34 newest-cni-237317 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 07:41:34 newest-cni-237317 containerd[757]: time="2025-12-10T07:41:34.250564996Z" level=info msg="containerd successfully booted in 0.087383s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:12.376688    5778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:51:12.377272    5778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:51:12.378924    5778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:51:12.379365    5778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:51:12.381024    5778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:51:12 up  6:33,  0 user,  load average: 0.19, 0.80, 1.55
	Linux newest-cni-237317 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:51:09 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:51:10 newest-cni-237317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 437.
	Dec 10 07:51:10 newest-cni-237317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:51:10 newest-cni-237317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:51:10 newest-cni-237317 kubelet[5656]: E1210 07:51:10.092217    5656 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:51:10 newest-cni-237317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:51:10 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:51:10 newest-cni-237317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 438.
	Dec 10 07:51:10 newest-cni-237317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:51:10 newest-cni-237317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:51:10 newest-cni-237317 kubelet[5662]: E1210 07:51:10.879216    5662 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:51:10 newest-cni-237317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:51:10 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:51:11 newest-cni-237317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 439.
	Dec 10 07:51:11 newest-cni-237317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:51:11 newest-cni-237317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:51:11 newest-cni-237317 kubelet[5673]: E1210 07:51:11.618985    5673 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:51:11 newest-cni-237317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:51:11 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:51:12 newest-cni-237317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 440.
	Dec 10 07:51:12 newest-cni-237317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:51:12 newest-cni-237317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:51:12 newest-cni-237317 kubelet[5771]: E1210 07:51:12.361342    5771 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:51:12 newest-cni-237317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:51:12 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-237317 -n newest-cni-237317
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-237317 -n newest-cni-237317: exit status 6 (395.968405ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 07:51:12.864766 1078086 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-237317" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-237317" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (88.65s)

TestStartStop/group/no-preload/serial/SecondStart (373.77s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-587009 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-587009 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 80 (6m8.622808368s)

-- stdout --
	* [no-preload-587009] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "no-preload-587009" primary control-plane node in "no-preload-587009" cluster
	* Pulling base image v0.0.48-1765319469-22089 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1210 07:51:10.500767 1077343 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:51:10.500906 1077343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:51:10.500918 1077343 out.go:374] Setting ErrFile to fd 2...
	I1210 07:51:10.500925 1077343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:51:10.501188 1077343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 07:51:10.501563 1077343 out.go:368] Setting JSON to false
	I1210 07:51:10.502565 1077343 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23595,"bootTime":1765329476,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 07:51:10.502637 1077343 start.go:143] virtualization:  
	I1210 07:51:10.505863 1077343 out.go:179] * [no-preload-587009] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:51:10.509712 1077343 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:51:10.509860 1077343 notify.go:221] Checking for updates...
	I1210 07:51:10.515820 1077343 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:51:10.518876 1077343 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:10.521864 1077343 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 07:51:10.524801 1077343 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:51:10.527730 1077343 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:51:10.531152 1077343 config.go:182] Loaded profile config "no-preload-587009": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:10.531739 1077343 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:51:10.571297 1077343 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:51:10.571458 1077343 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:51:10.628105 1077343 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:51:10.617928902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:51:10.628225 1077343 docker.go:319] overlay module found
	I1210 07:51:10.631349 1077343 out.go:179] * Using the docker driver based on existing profile
	I1210 07:51:10.634203 1077343 start.go:309] selected driver: docker
	I1210 07:51:10.634227 1077343 start.go:927] validating driver "docker" against &{Name:no-preload-587009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:51:10.634329 1077343 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:51:10.635191 1077343 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:51:10.689538 1077343 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:51:10.680270661 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:51:10.689877 1077343 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:51:10.689910 1077343 cni.go:84] Creating CNI manager for ""
	I1210 07:51:10.689965 1077343 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:51:10.690007 1077343 start.go:353] cluster config:
	{Name:no-preload-587009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
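	The cluster config dump above is minikube's persisted profile for this cluster; per the profile.go line that follows, the same structure is saved as JSON under the profile directory. A way to inspect it on the test host (a sketch, assuming the profile path from this run):
	
		python3 -m json.tool /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/config.json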
	I1210 07:51:10.693172 1077343 out.go:179] * Starting "no-preload-587009" primary control-plane node in "no-preload-587009" cluster
	I1210 07:51:10.696031 1077343 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:51:10.698925 1077343 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:51:10.701796 1077343 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:51:10.701893 1077343 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:51:10.701941 1077343 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/config.json ...
	I1210 07:51:10.702278 1077343 cache.go:107] acquiring lock: {Name:mkabea6e7b1e77c374f63c9a4d0766be00cc6317 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:51:10.702376 1077343 cache.go:115] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 07:51:10.702391 1077343 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 127.411µs
	I1210 07:51:10.702414 1077343 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 07:51:10.702433 1077343 cache.go:107] acquiring lock: {Name:mk64f56a3ea6b87518d3bc512eef54d76035bb9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:51:10.702503 1077343 cache.go:115] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1210 07:51:10.702516 1077343 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 84.062µs
	I1210 07:51:10.702523 1077343 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1210 07:51:10.702535 1077343 cache.go:107] acquiring lock: {Name:mkc3e57bbe80791d398050e8951aea73d362d920 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:51:10.702572 1077343 cache.go:115] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1210 07:51:10.702589 1077343 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 47.894µs
	I1210 07:51:10.702601 1077343 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1210 07:51:10.702613 1077343 cache.go:107] acquiring lock: {Name:mkb61a80f7472bdfd6bbc597d8ce9f0afe659105 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:51:10.702647 1077343 cache.go:115] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1210 07:51:10.702657 1077343 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 45.113µs
	I1210 07:51:10.702664 1077343 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1210 07:51:10.702674 1077343 cache.go:107] acquiring lock: {Name:mk88572bf90913c057455c882907a6c4416350fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:51:10.702700 1077343 cache.go:115] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1210 07:51:10.702709 1077343 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 37.44µs
	I1210 07:51:10.702716 1077343 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1210 07:51:10.702724 1077343 cache.go:107] acquiring lock: {Name:mk9279f9c659c863cac5b3805141cb5f659d3427 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:51:10.702756 1077343 cache.go:115] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 07:51:10.702765 1077343 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 42.396µs
	I1210 07:51:10.702778 1077343 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 07:51:10.702792 1077343 cache.go:107] acquiring lock: {Name:mk89d503b38bf82fa0b7406e77e02d931662720f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:51:10.702820 1077343 cache.go:115] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 07:51:10.702830 1077343 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 39.353µs
	I1210 07:51:10.702836 1077343 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 07:51:10.702845 1077343 cache.go:107] acquiring lock: {Name:mkde71767452c33eccd8ae2cb3e7952dfc30e95a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:51:10.702876 1077343 cache.go:115] /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 07:51:10.702885 1077343 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 40.813µs
	I1210 07:51:10.702891 1077343 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 07:51:10.702897 1077343 cache.go:87] Successfully saved all images to host disk.
	I1210 07:51:10.721758 1077343 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:51:10.721782 1077343 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:51:10.721797 1077343 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:51:10.721830 1077343 start.go:360] acquireMachinesLock for no-preload-587009: {Name:mk024fb9ab341e7f6dd2192e8a4fa44e5bf27c0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:51:10.721889 1077343 start.go:364] duration metric: took 38.934µs to acquireMachinesLock for "no-preload-587009"
	I1210 07:51:10.721913 1077343 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:51:10.721919 1077343 fix.go:54] fixHost starting: 
	I1210 07:51:10.722189 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:10.738987 1077343 fix.go:112] recreateIfNeeded on no-preload-587009: state=Stopped err=<nil>
	W1210 07:51:10.739020 1077343 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:51:10.742298 1077343 out.go:252] * Restarting existing docker container for "no-preload-587009" ...
	I1210 07:51:10.742407 1077343 cli_runner.go:164] Run: docker start no-preload-587009
	I1210 07:51:11.039727 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:11.064793 1077343 kic.go:430] container "no-preload-587009" state is running.
	I1210 07:51:11.065794 1077343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-587009
	I1210 07:51:11.090953 1077343 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/config.json ...
	I1210 07:51:11.091180 1077343 machine.go:94] provisionDockerMachine start ...
	I1210 07:51:11.091248 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:11.118540 1077343 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:11.118875 1077343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33840 <nil> <nil>}
	I1210 07:51:11.118891 1077343 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:51:11.119530 1077343 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38028->127.0.0.1:33840: read: connection reset by peer
	I1210 07:51:14.269979 1077343 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-587009
	
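	The first dial above failed with "connection reset by peer" because sshd inside the freshly restarted container was not yet accepting connections; libmachine retries until the hostname command succeeds, as seen at 07:51:14. The same endpoint can be reached by hand (illustrative only, assuming the 127.0.0.1:33840 port mapping and machine key from this run are still live):
	
		ssh -o StrictHostKeyChecking=no -i /home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa -p 33840 docker@127.0.0.1 hostname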
	I1210 07:51:14.270011 1077343 ubuntu.go:182] provisioning hostname "no-preload-587009"
	I1210 07:51:14.270115 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:14.295536 1077343 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:14.295890 1077343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33840 <nil> <nil>}
	I1210 07:51:14.295901 1077343 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-587009 && echo "no-preload-587009" | sudo tee /etc/hostname
	I1210 07:51:14.452920 1077343 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-587009
	
	I1210 07:51:14.453011 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:14.478828 1077343 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:14.479134 1077343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33840 <nil> <nil>}
	I1210 07:51:14.479150 1077343 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-587009' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-587009/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-587009' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:51:14.626210 1077343 main.go:143] libmachine: SSH cmd err, output: <nil>: 
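	The script above is idempotent: it touches /etc/hosts only when no line already ends in the node hostname, rewriting an existing 127.0.1.1 entry if present and appending one otherwise. The empty SSH output here means no change was needed. A spot-check inside the node (not part of the run):
	
		docker exec no-preload-587009 grep no-preload-587009 /etc/hosts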
	I1210 07:51:14.626250 1077343 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 07:51:14.626279 1077343 ubuntu.go:190] setting up certificates
	I1210 07:51:14.626296 1077343 provision.go:84] configureAuth start
	I1210 07:51:14.626367 1077343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-587009
	I1210 07:51:14.653396 1077343 provision.go:143] copyHostCerts
	I1210 07:51:14.653479 1077343 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 07:51:14.653501 1077343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 07:51:14.653585 1077343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 07:51:14.653695 1077343 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 07:51:14.653708 1077343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 07:51:14.653739 1077343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 07:51:14.653813 1077343 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 07:51:14.653823 1077343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 07:51:14.653849 1077343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 07:51:14.653913 1077343 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.no-preload-587009 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-587009]
	I1210 07:51:14.987883 1077343 provision.go:177] copyRemoteCerts
	I1210 07:51:14.987956 1077343 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:51:14.988006 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.016190 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.122129 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:51:15.168648 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:51:15.209293 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:51:15.238881 1077343 provision.go:87] duration metric: took 612.568009ms to configureAuth
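	configureAuth regenerated the machine server certificate with the SANs shown in the san=[...] line above and copied it to /etc/docker on the node. The SANs can be read back from the host-side copy (path taken from the auth options logged above):
	
		openssl x509 -in /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'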
	I1210 07:51:15.238905 1077343 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:51:15.239106 1077343 config.go:182] Loaded profile config "no-preload-587009": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:15.239113 1077343 machine.go:97] duration metric: took 4.147925818s to provisionDockerMachine
	I1210 07:51:15.239121 1077343 start.go:293] postStartSetup for "no-preload-587009" (driver="docker")
	I1210 07:51:15.239133 1077343 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:51:15.239186 1077343 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:51:15.239227 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.259116 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.370554 1077343 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:51:15.375386 1077343 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:51:15.375413 1077343 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:51:15.375424 1077343 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 07:51:15.375477 1077343 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 07:51:15.375560 1077343 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 07:51:15.375669 1077343 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:51:15.386817 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:51:15.415888 1077343 start.go:296] duration metric: took 176.733864ms for postStartSetup
	I1210 07:51:15.416018 1077343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:51:15.416065 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.439058 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.548495 1077343 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:51:15.553596 1077343 fix.go:56] duration metric: took 4.831668845s for fixHost
	I1210 07:51:15.553633 1077343 start.go:83] releasing machines lock for "no-preload-587009", held for 4.831730515s
	I1210 07:51:15.553722 1077343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-587009
	I1210 07:51:15.586973 1077343 ssh_runner.go:195] Run: cat /version.json
	I1210 07:51:15.587034 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.587329 1077343 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:51:15.587396 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.629146 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.634697 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.746290 1077343 ssh_runner.go:195] Run: systemctl --version
	I1210 07:51:15.838801 1077343 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:51:15.843040 1077343 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:51:15.843111 1077343 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:51:15.851174 1077343 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:51:15.851245 1077343 start.go:496] detecting cgroup driver to use...
	I1210 07:51:15.851294 1077343 detect.go:187] detected "cgroupfs" cgroup driver on host os
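	detect.go reports "cgroupfs" because that is what the host's Docker daemon advertises (CgroupDriver:cgroupfs in the docker info dumps above); minikube will configure containerd and kubelet to match. The same value can be read directly (illustrative):
	
		docker info --format '{{.CgroupDriver}} (cgroup v{{.CgroupVersion}})'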
	I1210 07:51:15.851351 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:51:15.869860 1077343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:51:15.883702 1077343 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:51:15.883777 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:51:15.899664 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:51:15.913011 1077343 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:51:16.034801 1077343 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:51:16.150617 1077343 docker.go:234] disabling docker service ...
	I1210 07:51:16.150759 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:51:16.165840 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:51:16.180309 1077343 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:51:16.307789 1077343 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:51:16.432072 1077343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:51:16.444962 1077343 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:51:16.459040 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:51:16.467874 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:51:16.476775 1077343 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:51:16.476842 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:51:16.485489 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:51:16.494113 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:51:16.502936 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:51:16.511763 1077343 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:51:16.519893 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:51:16.528779 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:51:16.537342 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:51:16.546138 1077343 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:51:16.553912 1077343 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:51:16.561714 1077343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:16.748597 1077343 ssh_runner.go:195] Run: sudo systemctl restart containerd
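	The sed edits above boil down to a handful of keys in /etc/containerd/config.toml: the sandbox (pause) image is pinned to registry.k8s.io/pause:3.10.1, SystemdCgroup is forced to false to match the cgroupfs driver, and the CNI conf_dir is set to /etc/cni/net.d. After the restart they can be spot-checked with (not part of the run):
	
		docker exec no-preload-587009 grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml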
	I1210 07:51:16.865266 1077343 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:51:16.865408 1077343 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:51:16.869450 1077343 start.go:564] Will wait 60s for crictl version
	I1210 07:51:16.869562 1077343 ssh_runner.go:195] Run: which crictl
	I1210 07:51:16.873018 1077343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:51:16.900099 1077343 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:51:16.900218 1077343 ssh_runner.go:195] Run: containerd --version
	I1210 07:51:16.923700 1077343 ssh_runner.go:195] Run: containerd --version
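	The bare crictl version call above works because crictl resolves its runtime endpoint from the /etc/crictl.yaml written a few steps earlier. The equivalent explicit invocation would be:
	
		docker exec no-preload-587009 sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version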
	I1210 07:51:16.947379 1077343 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 07:51:16.950227 1077343 cli_runner.go:164] Run: docker network inspect no-preload-587009 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:51:16.965229 1077343 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 07:51:16.969175 1077343 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:51:16.978619 1077343 kubeadm.go:884] updating cluster {Name:no-preload-587009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:51:16.978743 1077343 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:51:16.978798 1077343 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:51:17.014301 1077343 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:51:17.014333 1077343 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:51:17.014341 1077343 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1210 07:51:17.014532 1077343 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-587009 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:51:17.014625 1077343 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:51:17.044039 1077343 cni.go:84] Creating CNI manager for ""
	I1210 07:51:17.044060 1077343 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:51:17.044082 1077343 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:51:17.044104 1077343 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-587009 NodeName:no-preload-587009 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:51:17.044222 1077343 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-587009"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
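	minikube writes this rendered config to /var/tmp/minikube/kubeadm.yaml.new (see the scp below). On kubeadm releases that ship the validate subcommand (v1.26+), the file can be sanity-checked in place; a sketch, assuming kubeadm sits alongside kubelet in the binaries directory listed next:
	
		docker exec no-preload-587009 sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new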
	I1210 07:51:17.044289 1077343 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:51:17.052024 1077343 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:51:17.052101 1077343 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:51:17.059722 1077343 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 07:51:17.072494 1077343 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:51:17.086253 1077343 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
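	The three scp memory transfers above land the kubelet drop-in, the kubelet unit, and the kubeadm config on the node. The merged kubelet unit (base unit plus the 10-kubeadm.conf drop-in with the ExecStart shown earlier) can be reviewed with (illustrative):
	
		docker exec no-preload-587009 systemctl cat kubelet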
	I1210 07:51:17.099376 1077343 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:51:17.102883 1077343 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:51:17.112330 1077343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:17.225530 1077343 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:51:17.246996 1077343 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009 for IP: 192.168.85.2
	I1210 07:51:17.247021 1077343 certs.go:195] generating shared ca certs ...
	I1210 07:51:17.247038 1077343 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:17.247186 1077343 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 07:51:17.247238 1077343 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 07:51:17.247248 1077343 certs.go:257] generating profile certs ...
	I1210 07:51:17.247347 1077343 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/client.key
	I1210 07:51:17.247407 1077343 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.key.841ee17a
	I1210 07:51:17.247454 1077343 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/proxy-client.key
	I1210 07:51:17.247566 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 07:51:17.247604 1077343 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 07:51:17.247617 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:51:17.247646 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:51:17.247674 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:51:17.247712 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 07:51:17.247768 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:51:17.248384 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:51:17.265969 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:51:17.284190 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:51:17.302881 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:51:17.324073 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:51:17.341990 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 07:51:17.359614 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:51:17.377843 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:51:17.395426 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 07:51:17.413039 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:51:17.430522 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 07:51:17.447821 1077343 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:51:17.460777 1077343 ssh_runner.go:195] Run: openssl version
	I1210 07:51:17.467243 1077343 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 07:51:17.474706 1077343 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 07:51:17.482273 1077343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 07:51:17.485950 1077343 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 07:51:17.486025 1077343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 07:51:17.526902 1077343 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:51:17.534224 1077343 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 07:51:17.541448 1077343 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 07:51:17.549037 1077343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 07:51:17.552765 1077343 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 07:51:17.552832 1077343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 07:51:17.595755 1077343 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:51:17.603128 1077343 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:17.610926 1077343 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:51:17.618981 1077343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:17.622497 1077343 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:17.622563 1077343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:17.663609 1077343 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:51:17.670957 1077343 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:51:17.674676 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:51:17.715746 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:51:17.758195 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:51:17.799081 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:51:17.840047 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:51:17.880964 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
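	Each -checkend 86400 run above exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, which is how minikube decides that none of the control-plane certs need regeneration before restart. A standalone sketch with a placeholder file name:
	
		openssl x509 -noout -in some-cert.pem -checkend 86400 && echo 'valid for >24h' || echo 'expired or expiring within 24h'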
	I1210 07:51:17.921878 1077343 kubeadm.go:401] StartCluster: {Name:no-preload-587009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:51:17.921988 1077343 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:51:17.922092 1077343 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:51:17.951649 1077343 cri.go:89] found id: ""
	I1210 07:51:17.951796 1077343 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:51:17.959534 1077343 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:51:17.959555 1077343 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:51:17.959635 1077343 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:51:17.966920 1077343 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:51:17.967331 1077343 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-587009" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:17.967425 1077343 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-784887/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-587009" cluster setting kubeconfig missing "no-preload-587009" context setting]
	I1210 07:51:17.967687 1077343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:17.968903 1077343 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:51:17.977669 1077343 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1210 07:51:17.977707 1077343 kubeadm.go:602] duration metric: took 18.146766ms to restartPrimaryControlPlane
	I1210 07:51:17.977718 1077343 kubeadm.go:403] duration metric: took 55.849318ms to StartCluster
	I1210 07:51:17.977733 1077343 settings.go:142] acquiring lock: {Name:mkea9bfe0055ec090e4ce10a287599541b65b38e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:17.977796 1077343 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:17.978427 1077343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:17.978652 1077343 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:51:17.978958 1077343 config.go:182] Loaded profile config "no-preload-587009": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:17.979006 1077343 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:51:17.979072 1077343 addons.go:70] Setting storage-provisioner=true in profile "no-preload-587009"
	I1210 07:51:17.979085 1077343 addons.go:239] Setting addon storage-provisioner=true in "no-preload-587009"
	I1210 07:51:17.979106 1077343 host.go:66] Checking if "no-preload-587009" exists ...
	I1210 07:51:17.979123 1077343 addons.go:70] Setting dashboard=true in profile "no-preload-587009"
	I1210 07:51:17.979139 1077343 addons.go:239] Setting addon dashboard=true in "no-preload-587009"
	W1210 07:51:17.979155 1077343 addons.go:248] addon dashboard should already be in state true
	I1210 07:51:17.979179 1077343 host.go:66] Checking if "no-preload-587009" exists ...
	I1210 07:51:17.979564 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:17.979606 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:17.982091 1077343 addons.go:70] Setting default-storageclass=true in profile "no-preload-587009"
	I1210 07:51:17.982247 1077343 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-587009"
	I1210 07:51:17.983173 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:17.984528 1077343 out.go:179] * Verifying Kubernetes components...
	I1210 07:51:17.987357 1077343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:18.030694 1077343 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:51:18.030828 1077343 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 07:51:18.034622 1077343 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 07:51:18.034780 1077343 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:18.034793 1077343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:51:18.034874 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:18.037543 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 07:51:18.037568 1077343 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 07:51:18.037639 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:18.041604 1077343 addons.go:239] Setting addon default-storageclass=true in "no-preload-587009"
	I1210 07:51:18.041645 1077343 host.go:66] Checking if "no-preload-587009" exists ...
	I1210 07:51:18.042060 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:18.105147 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:18.114730 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:18.115497 1077343 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:18.115511 1077343 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:51:18.115563 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:18.135449 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
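The cli_runner lines above ask Docker which host port is published for the container's 22/tcp, and sshutil then dials that port on 127.0.0.1 (33840 in this run). A self-contained sketch of the same inspect call; the helper name is mine, while the format string and container name are copied from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostSSHPort runs the same "docker container inspect -f ..." query the
    // log shows, returning the host port mapped to the container's 22/tcp.
    func hostSSHPort(container string) (string, error) {
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
        if err != nil {
            return "", fmt.Errorf("inspect %s: %w", container, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := hostSSHPort("no-preload-587009")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println("ssh port:", port) // 33840 in this run
    }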
	I1210 07:51:18.230094 1077343 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:51:18.264441 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:18.283658 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 07:51:18.283729 1077343 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 07:51:18.329062 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 07:51:18.329133 1077343 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 07:51:18.353549 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 07:51:18.353629 1077343 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 07:51:18.357622 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:18.376127 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 07:51:18.376202 1077343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 07:51:18.447999 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 07:51:18.448021 1077343 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 07:51:18.470186 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 07:51:18.470208 1077343 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 07:51:18.489233 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 07:51:18.489255 1077343 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 07:51:18.503805 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 07:51:18.503828 1077343 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 07:51:18.521545 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:18.521566 1077343 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 07:51:18.536611 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:19.053453 1077343 node_ready.go:35] waiting up to 6m0s for node "no-preload-587009" to be "Ready" ...
	W1210 07:51:19.053800 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.053834 1077343 retry.go:31] will retry after 261.467752ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
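Every failure in this stretch dies at the same step: kubectl's client-side validation downloads the OpenAPI schema from the apiserver before applying anything, and the apiserver on localhost:8443 is refusing connections while the control plane restarts. The --validate=false escape hatch kubectl suggests would only skip the schema download; the apply itself targets the same unreachable endpoint, which is why the harness retries rather than disabling validation. For reference, one attempt as the log runs it, with the suggested flag added (a sketch, not what the harness executes):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same invocation as the ssh_runner line above, plus --validate=false.
        // sudo accepts the leading VAR=value assignment.
        out, err := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "apply", "--validate=false",
            "-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
        ).CombinedOutput()
        fmt.Printf("%s(err=%v)\n", out, err)
    }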
	W1210 07:51:19.053883 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.053894 1077343 retry.go:31] will retry after 368.94912ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:19.054089 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.054104 1077343 retry.go:31] will retry after 338.426434ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.315446 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:19.382015 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.382044 1077343 retry.go:31] will retry after 337.060159ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.393358 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:19.424101 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:19.491743 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.491780 1077343 retry.go:31] will retry after 471.881278ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:19.538786 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.538838 1077343 retry.go:31] will retry after 528.879721ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.719721 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:19.790713 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.790742 1077343 retry.go:31] will retry after 510.29035ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.964160 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:20.068233 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:20.070746 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.070792 1077343 retry.go:31] will retry after 543.265245ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:20.148457 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.148492 1077343 retry.go:31] will retry after 460.630823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.301882 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:20.397427 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.397476 1077343 retry.go:31] will retry after 801.303312ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.609734 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:20.615162 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:20.763154 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.763202 1077343 retry.go:31] will retry after 629.698549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:20.763322 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.763340 1077343 retry.go:31] will retry after 624.408887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:21.054168 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
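Alongside the addon retries, node_ready.go is polling GET /api/v1/nodes/no-preload-587009 and treating connection-refused as retriable while the apiserver restarts. A sketch of that readiness poll with client-go; the helper name and 2s polling interval are mine, while only the node name and the 6m budget come from the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls until the named node reports Ready=True, treating
    // transient apiserver errors (like connection refused) as retriable.
    func waitNodeReady(client kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                fmt.Println("will retry:", err)
                return false, nil
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        if err := waitNodeReady(kubernetes.NewForConfigOrDie(cfg), "no-preload-587009", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }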
	I1210 07:51:21.199599 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:21.288128 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:21.288156 1077343 retry.go:31] will retry after 1.429543278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:21.388486 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:21.393905 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:21.513396 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:21.513426 1077343 retry.go:31] will retry after 1.363983036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:21.522339 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:21.522370 1077343 retry.go:31] will retry after 1.881789089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.718226 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:22.784732 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.784765 1077343 retry.go:31] will retry after 2.14784628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.877998 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:22.948118 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.948146 1077343 retry.go:31] will retry after 2.832610868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:23.054783 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:23.404396 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:23.467879 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.467914 1077343 retry.go:31] will retry after 2.135960827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.933362 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:24.999854 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.999895 1077343 retry.go:31] will retry after 3.6382738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:25.554994 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:25.604337 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:25.669224 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.669262 1077343 retry.go:31] will retry after 2.194006804s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.781321 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:25.929708 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.929740 1077343 retry.go:31] will retry after 3.276039002s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.863966 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:27.927673 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.927709 1077343 retry.go:31] will retry after 5.303571514s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:28.054575 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:28.639292 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:28.698653 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.698686 1077343 retry.go:31] will retry after 3.005783671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:29.206806 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:29.264930 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:29.264960 1077343 retry.go:31] will retry after 2.489245949s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:30.554528 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:31.705403 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:31.754983 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:31.764053 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.764088 1077343 retry.go:31] will retry after 6.263299309s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:31.824900 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.824937 1077343 retry.go:31] will retry after 8.063912103s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:32.554572 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:33.232049 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:33.291801 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:33.291838 1077343 retry.go:31] will retry after 5.361341065s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:34.554757 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:51:37.053891 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:38.027881 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:38.116733 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:38.116768 1077343 retry.go:31] will retry after 12.105620641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:38.653613 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:38.715053 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:38.715087 1077343 retry.go:31] will retry after 11.375750542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:39.554885 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:39.889521 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:39.947993 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:39.948032 1077343 retry.go:31] will retry after 6.34767532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
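
The kubectl hint about --validate=false only disables client-side schema validation; the apply itself still has to reach the apiserver, and the log shows localhost:8443 refusing connections. A small Go probe to confirm that before retrying (the address and timeout mirror the log; wiring a probe in front of the retry loop is an assumption):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// the same endpoint kubectl fails to reach in the errors above
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // "connect: connection refused"
		return
	}
	conn.Close()
	fmt.Println("apiserver port open; a retried apply can at least connect")
}
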
	W1210 07:51:42.054758 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:51:44.554149 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:46.296554 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:46.375385 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:46.375418 1077343 retry.go:31] will retry after 17.860418691s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:47.054540 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:51:49.054867 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:50.091584 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:50.153219 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:50.153253 1077343 retry.go:31] will retry after 15.008999648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:50.223406 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:50.279259 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:50.279296 1077343 retry.go:31] will retry after 9.416080018s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:51.553954 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:51:53.554866 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:51:56.054059 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:51:58.554867 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:59.696250 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:59.757338 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:59.757373 1077343 retry.go:31] will retry after 26.778697297s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
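
ssh_runner.go:195 runs that kubectl command inside the node over SSH. A hedged local stand-in with os/exec that mirrors the logged command line (the binary and manifest paths are copied from the log; executing this outside the minikube node is purely for illustration):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// mirrors: sudo KUBECONFIG=... kubectl apply --force -f storage-provisioner.yaml
	// (sudo accepts VAR=value assignments ahead of the command)
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// a non-zero exit is what the log reports as "Process exited with status 1"
		fmt.Println("apply failed:", err)
	}
}
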
	W1210 07:52:01.054130 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:03.554867 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:04.236888 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:04.303052 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:04.303083 1077343 retry.go:31] will retry after 25.859676141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:05.163286 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:52:05.227326 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:05.227361 1077343 retry.go:31] will retry after 29.528693098s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:06.053981 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:08.554858 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:11.053980 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:13.054863 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:15.055109 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:17.055513 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:19.553887 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:21.554922 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:24.054424 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
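
The node_ready.go:55 warnings are minikube polling the node's Ready condition through the apiserver roughly every 2.5 seconds. A client-go sketch of such a poll (the kubeconfig path, node name, and interval are taken from the log; the loop itself is an assumption, not minikube's code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-587009", metav1.GetOptions{})
		if err != nil {
			// the repeated "connection refused" warnings above come from this path
			fmt.Println("error getting node (will retry):", err)
			time.Sleep(2500 * time.Millisecond)
			continue
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(2500 * time.Millisecond)
	}
}
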
	I1210 07:52:26.536333 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:52:26.554155 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:26.621759 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:26.621788 1077343 retry.go:31] will retry after 32.881374862s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:29.054917 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:30.163626 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:30.226039 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:30.226073 1077343 retry.go:31] will retry after 27.175178767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:31.554771 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:34.054729 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:34.756990 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:52:34.831836 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:34.831956 1077343 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1210 07:52:36.554875 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:39.054027 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:41.054361 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:43.054892 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:45.554347 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:47.554702 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:50.054996 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:52.555048 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:55.054639 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:57.402101 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:57.460754 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:57.460865 1077343 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1210 07:52:57.554262 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:59.503589 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:52:59.554549 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:59.576553 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:59.576655 1077343 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:52:59.579701 1077343 out.go:179] * Enabled addons: 
	I1210 07:52:59.582536 1077343 addons.go:530] duration metric: took 1m41.60352286s for enable addons: enabled=[]
	W1210 07:53:02.054010 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:04.555038 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:07.053986 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:09.554520 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:11.554633 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	[... the same node_ready.go:55 "connection refused" warning repeats at roughly 2.5s intervals until 07:57:17 ...]
	W1210 07:57:17.554945 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:57:19.054633 1077343 node_ready.go:38] duration metric: took 6m0.001135979s for node "no-preload-587009" to be "Ready" ...
	I1210 07:57:19.057729 1077343 out.go:203] 
	W1210 07:57:19.060573 1077343 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 07:57:19.060592 1077343 out.go:285] * 
	W1210 07:57:19.062943 1077343 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:57:19.065570 1077343 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p no-preload-587009 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 80
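The failure mode above is mechanical: the node_ready wait polls GET /api/v1/nodes/no-preload-587009 roughly every 2.5s, every poll fails with "connect: connection refused" (the node's IP answers, but nothing is listening on 8443, i.e. kube-apiserver never came back after the stop), and once the 6m0s budget is spent the wait surfaces "WaitNodeCondition: context deadline exceeded". A minimal sketch of the same kind of liveness probe, standard library only (illustrative, not minikube's code; the address is taken from the log above):

// probe_apiserver.go - distinguishes "connection refused" (port reachable,
// no listener: the apiserver process is down) from a dial timeout (host or
// network unreachable). Illustrative sketch, not part of the test suite.
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	addr := "192.168.85.2:8443" // apiserver endpoint from the log above

	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err == nil {
		conn.Close()
		fmt.Println("listener up on", addr)
		return
	}

	var nerr net.Error
	switch {
	case errors.Is(err, syscall.ECONNREFUSED):
		// The case seen throughout this log: the node sends a TCP RST,
		// so the network path is fine and kube-apiserver is simply down.
		fmt.Println("connection refused:", addr)
	case errors.As(err, &nerr) && nerr.Timeout():
		fmt.Println("dial timeout (host or network unreachable):", addr)
	default:
		fmt.Println("dial error:", err)
	}
}

Pointing the same check at the port published on the host (127.0.0.1:33843, per the docker inspect below) separates an apiserver failure from a Docker port-forwarding one.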
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-587009
helpers_test.go:244: (dbg) docker inspect no-preload-587009:

-- stdout --
	[
	    {
	        "Id": "59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf",
	        "Created": "2025-12-10T07:40:57.013300176Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1077472,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:51:10.781643992Z",
	            "FinishedAt": "2025-12-10T07:51:09.433560094Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/hostname",
	        "HostsPath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/hosts",
	        "LogPath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf-json.log",
	        "Name": "/no-preload-587009",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-587009:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-587009",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf",
	                "LowerDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-587009",
	                "Source": "/var/lib/docker/volumes/no-preload-587009/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-587009",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-587009",
	                "name.minikube.sigs.k8s.io": "no-preload-587009",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3027da22b232bea75e393d2b661101d643e6e04216f3ba2ece99c7a84ae4f2ee",
	            "SandboxKey": "/var/run/docker/netns/3027da22b232",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33840"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33841"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33844"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33842"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33843"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-587009": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:01:16:c7:75:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "93aee88a5c37e6ba01b74d7794f328193d01cfce9cb66379ab00b3f3e2e73a48",
	                    "EndpointID": "4717ce896d8375f79b53590f55b234cfc29918d126a12ae9fa574429e9722162",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-587009",
	                        "59d9b9413fb3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
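The inspect output shows the container side is healthy: State.Status is "running", the profile volume and /lib/modules mounts are attached, and 8443/tcp is published on 127.0.0.1:33843, which points the fault at the guest (the apiserver) rather than at Docker. For hand triage, the few fields this post-mortem keys on can be pulled out of the same JSON; a small sketch of such a helper (hypothetical, not part of helpers_test.go):

// inspect_ports.go - decodes `docker inspect <container>` JSON and prints
// container state plus the host binding for the apiserver's 8443/tcp.
// Hypothetical helper for manual triage, not test-suite code.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type container struct {
	State struct {
		Status  string
		Running bool
	}
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "no-preload-587009").Output()
	if err != nil {
		log.Fatal(err)
	}
	var cs []container // docker inspect always emits a JSON array
	if err := json.Unmarshal(out, &cs); err != nil || len(cs) == 0 {
		log.Fatalf("unmarshal failed: %v", err)
	}
	c := cs[0]
	fmt.Printf("state=%s running=%v\n", c.State.Status, c.State.Running)
	for _, b := range c.NetworkSettings.Ports["8443/tcp"] {
		fmt.Printf("8443/tcp -> %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:33843 above
	}
}

The same fields are also reachable with docker's built-in templating, e.g. docker inspect -f '{{.State.Status}}' no-preload-587009.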
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-587009 -n no-preload-587009
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-587009 -n no-preload-587009: exit status 2 (316.060964ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
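The --format flag is a Go text/template over minikube's status struct, so only the host state is printed ("Running" above); the non-zero exit encodes that at least one other component (apiserver, kubelet, kubeconfig) is not in its expected state. That is why the harness treats exit status 2 as "may be ok": the host is up, so post-mortem logs can still be collected.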
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-587009 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-587009 logs -n 25: (2.175920553s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-254586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ stop    │ -p embed-certs-254586 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-254586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ start   │ -p embed-certs-254586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:41 UTC │
	│ image   │ default-k8s-diff-port-444518 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ pause   │ -p default-k8s-diff-port-444518 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ unpause │ -p default-k8s-diff-port-444518 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p default-k8s-diff-port-444518                                                                                                                                                                                                                            │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p default-k8s-diff-port-444518                                                                                                                                                                                                                            │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p disable-driver-mounts-262664                                                                                                                                                                                                                            │ disable-driver-mounts-262664 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ start   │ -p no-preload-587009 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │                     │
	│ image   │ embed-certs-254586 image list --format=json                                                                                                                                                                                                                │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ pause   │ -p embed-certs-254586 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ unpause │ -p embed-certs-254586 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ delete  │ -p embed-certs-254586                                                                                                                                                                                                                                      │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ delete  │ -p embed-certs-254586                                                                                                                                                                                                                                      │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ start   │ -p newest-cni-237317 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-587009 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:49 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-237317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:49 UTC │                     │
	│ stop    │ -p no-preload-587009 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │ 10 Dec 25 07:51 UTC │
	│ addons  │ enable dashboard -p no-preload-587009 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │ 10 Dec 25 07:51 UTC │
	│ start   │ -p no-preload-587009 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │                     │
	│ stop    │ -p newest-cni-237317 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │ 10 Dec 25 07:51 UTC │
	│ addons  │ enable dashboard -p newest-cni-237317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │ 10 Dec 25 07:51 UTC │
	│ start   │ -p newest-cni-237317 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:51:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:51:14.495415 1078428 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:51:14.495519 1078428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:51:14.495524 1078428 out.go:374] Setting ErrFile to fd 2...
	I1210 07:51:14.495529 1078428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:51:14.495772 1078428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 07:51:14.496198 1078428 out.go:368] Setting JSON to false
	I1210 07:51:14.497022 1078428 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23599,"bootTime":1765329476,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 07:51:14.497081 1078428 start.go:143] virtualization:  
	I1210 07:51:14.500489 1078428 out.go:179] * [newest-cni-237317] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:51:14.503586 1078428 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:51:14.503671 1078428 notify.go:221] Checking for updates...
	I1210 07:51:14.509469 1078428 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:51:14.512370 1078428 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:14.515169 1078428 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 07:51:14.518012 1078428 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:51:14.520797 1078428 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:51:14.527169 1078428 config.go:182] Loaded profile config "newest-cni-237317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:14.527731 1078428 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:51:14.566042 1078428 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:51:14.566172 1078428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:51:14.628663 1078428 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-10 07:51:14.618086592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:51:14.628767 1078428 docker.go:319] overlay module found
	I1210 07:51:14.631981 1078428 out.go:179] * Using the docker driver based on existing profile
	I1210 07:51:14.634809 1078428 start.go:309] selected driver: docker
	I1210 07:51:14.634833 1078428 start.go:927] validating driver "docker" against &{Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:51:14.634946 1078428 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:51:14.635637 1078428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:51:14.728404 1078428 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-10 07:51:14.713293715 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
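The `docker system info --format "{{json .}}"` call above dumps the daemon's entire info document as a single JSON object; the driver validation only needs a handful of fields from it (CPU count, total memory, OS, architecture). A minimal sketch of decoding just those fields — the field names match the JSON visible in the log line, everything else is illustrative, not minikube's parser:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// dockerInfo holds the few fields this sketch cares about; the real document
// carries many more (Driver, CgroupDriver, ServerVersion, ...).
type dockerInfo struct {
	NCPU         int    `json:"NCPU"`
	MemTotal     int64  `json:"MemTotal"`
	OSType       string `json:"OSType"`
	Architecture string `json:"Architecture"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		log.Fatalf("docker system info: %v", err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatalf("decode: %v", err)
	}
	fmt.Printf("cpus=%d mem=%dMiB os=%s arch=%s\n",
		info.NCPU, info.MemTotal/(1024*1024), info.OSType, info.Architecture)
}
```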
	I1210 07:51:14.728788 1078428 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 07:51:14.728810 1078428 cni.go:84] Creating CNI manager for ""
	I1210 07:51:14.728854 1078428 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
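The cni.go lines above apply a simple rule: the docker driver paired with the containerd runtime gets kindnet recommended as the CNI. A hedged sketch of that kind of decision table (illustrative only; minikube's real rule set covers more drivers, runtimes, and user overrides):

```go
package main

import "fmt"

// chooseCNI mirrors the decision logged above: with the docker driver and a
// non-docker runtime such as containerd, a real CNI (kindnet here) is needed.
// The fallback is an assumption for this sketch, not minikube's actual default.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime == "containerd" {
		return "kindnet"
	}
	return "bridge"
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd")) // kindnet
}
```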
	I1210 07:51:14.728892 1078428 start.go:353] cluster config:
	{Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:51:14.732274 1078428 out.go:179] * Starting "newest-cni-237317" primary control-plane node in "newest-cni-237317" cluster
	I1210 07:51:14.735049 1078428 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:51:14.738088 1078428 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:51:14.740969 1078428 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:51:14.741011 1078428 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 07:51:14.741020 1078428 cache.go:65] Caching tarball of preloaded images
	I1210 07:51:14.741100 1078428 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 07:51:14.741110 1078428 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
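The preload lookup above is just a stat on a cache path whose file name encodes the preload schema version, Kubernetes version, runtime, storage driver, and architecture. A sketch that rebuilds the same layout seen in the log (the v18 schema segment and directory structure are copied from the line above; the helper itself is illustrative):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath rebuilds the tarball name seen in the log:
// preloaded-images-k8s-v18-<k8sVersion>-<runtime>-overlay2-<arch>.tar.lz4
func preloadPath(cacheDir, k8sVersion, runtime, arch string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-%s.tar.lz4",
		k8sVersion, runtime, arch)
	return filepath.Join(cacheDir, "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.ExpandEnv("$HOME/.minikube/cache"),
		"v1.35.0-beta.0", "containerd", "arm64")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p)
	} else {
		fmt.Println("no local preload:", p)
	}
}
```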
	I1210 07:51:14.741232 1078428 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json ...
	I1210 07:51:14.741437 1078428 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:51:14.763634 1078428 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:51:14.763653 1078428 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:51:14.763668 1078428 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:51:14.763698 1078428 start.go:360] acquireMachinesLock for newest-cni-237317: {Name:mk865fae7594bb364b28b787041666ed4ecb9dd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:51:14.763755 1078428 start.go:364] duration metric: took 40.304µs to acquireMachinesLock for "newest-cni-237317"
	I1210 07:51:14.763774 1078428 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:51:14.763779 1078428 fix.go:54] fixHost starting: 
	I1210 07:51:14.764055 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:14.807148 1078428 fix.go:112] recreateIfNeeded on newest-cni-237317: state=Stopped err=<nil>
	W1210 07:51:14.807188 1078428 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:51:10.742298 1077343 out.go:252] * Restarting existing docker container for "no-preload-587009" ...
	I1210 07:51:10.742407 1077343 cli_runner.go:164] Run: docker start no-preload-587009
	I1210 07:51:11.039727 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:11.064793 1077343 kic.go:430] container "no-preload-587009" state is running.
	I1210 07:51:11.065794 1077343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-587009
	I1210 07:51:11.090953 1077343 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/config.json ...
	I1210 07:51:11.091180 1077343 machine.go:94] provisionDockerMachine start ...
	I1210 07:51:11.091248 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:11.118540 1077343 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:11.118875 1077343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33840 <nil> <nil>}
	I1210 07:51:11.118891 1077343 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:51:11.119530 1077343 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38028->127.0.0.1:33840: read: connection reset by peer
	I1210 07:51:14.269979 1077343 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-587009
	
	I1210 07:51:14.270011 1077343 ubuntu.go:182] provisioning hostname "no-preload-587009"
	I1210 07:51:14.270115 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:14.295536 1077343 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:14.295890 1077343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33840 <nil> <nil>}
	I1210 07:51:14.295901 1077343 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-587009 && echo "no-preload-587009" | sudo tee /etc/hostname
	I1210 07:51:14.452920 1077343 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-587009
	
	I1210 07:51:14.453011 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:14.478828 1077343 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:14.479134 1077343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33840 <nil> <nil>}
	I1210 07:51:14.479150 1077343 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-587009' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-587009/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-587009' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:51:14.626210 1077343 main.go:143] libmachine: SSH cmd err, output: <nil>: 
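Each provisioning step above is one SSH command against the container's published 22/tcp port (127.0.0.1:33840 here), authenticated with the machine's id_rsa key. A minimal sketch of that round trip with golang.org/x/crypto/ssh — the address, user, and key path are the ones this log shows; the code is illustrative, not minikube's SSH client:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH dials addr with a private key and returns the command's combined output.
func runSSH(addr, user, keyPath, cmd string) (string, error) {
	keyPEM, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rigs skip host verification
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runSSH("127.0.0.1:33840", "docker",
		os.ExpandEnv("$HOME/.minikube/machines/no-preload-587009/id_rsa"), "hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}
```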
	I1210 07:51:14.626250 1077343 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 07:51:14.626279 1077343 ubuntu.go:190] setting up certificates
	I1210 07:51:14.626296 1077343 provision.go:84] configureAuth start
	I1210 07:51:14.626367 1077343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-587009
	I1210 07:51:14.653396 1077343 provision.go:143] copyHostCerts
	I1210 07:51:14.653479 1077343 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 07:51:14.653501 1077343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 07:51:14.653585 1077343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 07:51:14.653695 1077343 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 07:51:14.653708 1077343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 07:51:14.653739 1077343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 07:51:14.653813 1077343 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 07:51:14.653823 1077343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 07:51:14.653849 1077343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 07:51:14.653913 1077343 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.no-preload-587009 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-587009]
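The server cert generated above carries both IP and DNS SANs (san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-587009]); in Go's crypto/x509 those split into the template's IPAddresses and DNSNames. A sketch that signs such a certificate from a throwaway in-memory CA (minikube instead loads its persistent CA from ca.pem/ca-key.pem; subjects and lifetimes here are assumptions):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway self-signed CA for the sketch.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "sketchCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs seen in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-587009"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		DNSNames:     []string{"localhost", "minikube", "no-preload-587009"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```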
	I1210 07:51:14.987883 1077343 provision.go:177] copyRemoteCerts
	I1210 07:51:14.987956 1077343 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:51:14.988006 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.016190 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.122129 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:51:15.168648 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:51:15.209293 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:51:15.238881 1077343 provision.go:87] duration metric: took 612.568009ms to configureAuth
	I1210 07:51:15.238905 1077343 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:51:15.239106 1077343 config.go:182] Loaded profile config "no-preload-587009": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:15.239113 1077343 machine.go:97] duration metric: took 4.147925818s to provisionDockerMachine
	I1210 07:51:15.239121 1077343 start.go:293] postStartSetup for "no-preload-587009" (driver="docker")
	I1210 07:51:15.239133 1077343 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:51:15.239186 1077343 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:51:15.239227 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.259116 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.370554 1077343 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:51:15.375386 1077343 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:51:15.375413 1077343 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:51:15.375424 1077343 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 07:51:15.375477 1077343 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 07:51:15.375560 1077343 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 07:51:15.375669 1077343 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:51:15.386817 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:51:15.415888 1077343 start.go:296] duration metric: took 176.733864ms for postStartSetup
	I1210 07:51:15.416018 1077343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:51:15.416065 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.439058 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.548495 1077343 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:51:15.553596 1077343 fix.go:56] duration metric: took 4.831668845s for fixHost
	I1210 07:51:15.553633 1077343 start.go:83] releasing machines lock for "no-preload-587009", held for 4.831730515s
	I1210 07:51:15.553722 1077343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-587009
	I1210 07:51:15.586973 1077343 ssh_runner.go:195] Run: cat /version.json
	I1210 07:51:15.587034 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.587329 1077343 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:51:15.587396 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.629146 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.634697 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.746290 1077343 ssh_runner.go:195] Run: systemctl --version
	I1210 07:51:15.838801 1077343 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:51:15.843040 1077343 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:51:15.843111 1077343 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:51:15.851174 1077343 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:51:15.851245 1077343 start.go:496] detecting cgroup driver to use...
	I1210 07:51:15.851294 1077343 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:51:15.851351 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:51:15.869860 1077343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:51:15.883702 1077343 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:51:15.883777 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:51:15.899664 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:51:15.913011 1077343 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:51:16.034801 1077343 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:51:16.150617 1077343 docker.go:234] disabling docker service ...
	I1210 07:51:16.150759 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:51:16.165840 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:51:16.180309 1077343 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:51:16.307789 1077343 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:51:16.432072 1077343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:51:16.444962 1077343 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:51:16.459040 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:51:16.467874 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:51:16.476775 1077343 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:51:16.476842 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:51:16.485489 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:51:16.494113 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:51:16.502936 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:51:16.511763 1077343 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:51:16.519893 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:51:16.528779 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:51:16.537342 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:51:16.546138 1077343 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:51:16.553912 1077343 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:51:16.561714 1077343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:16.748597 1077343 ssh_runner.go:195] Run: sudo systemctl restart containerd
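The run of sed commands above rewrites /etc/containerd/config.toml in place — pinning the sandbox image, forcing SystemdCgroup = false to match the detected cgroupfs driver, migrating runtime names to io.containerd.runc.v2, fixing conf_dir — and then restarts containerd. One of those edits expressed as a Go regexp rewrite, preserving indentation the way the sed capture group does (a sketch of the pattern, not minikube's code):

```go
package main

import (
	"fmt"
	"regexp"
)

// setSystemdCgroup mirrors the sed expression
//   s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g
// keeping the original indentation via the capture group.
func setSystemdCgroup(config []byte, enabled bool) []byte {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAll(config, []byte(fmt.Sprintf("${1}SystemdCgroup = %v", enabled)))
}

func main() {
	in := []byte("      SystemdCgroup = true\n")
	fmt.Print(string(setSystemdCgroup(in, false))) // "      SystemdCgroup = false"
}
```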
	I1210 07:51:16.865266 1077343 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:51:16.865408 1077343 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:51:16.869450 1077343 start.go:564] Will wait 60s for crictl version
	I1210 07:51:16.869562 1077343 ssh_runner.go:195] Run: which crictl
	I1210 07:51:16.873018 1077343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:51:16.900099 1077343 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
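Both "Will wait 60s" steps above are deadline polls: stat the socket path (or probe crictl) until it appears or the budget runs out. A sketch of that loop, with the 500ms poll interval being an assumption:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or the deadline passes,
// matching the "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket ready")
}
```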
	I1210 07:51:16.900218 1077343 ssh_runner.go:195] Run: containerd --version
	I1210 07:51:16.923700 1077343 ssh_runner.go:195] Run: containerd --version
	I1210 07:51:16.947379 1077343 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 07:51:16.950227 1077343 cli_runner.go:164] Run: docker network inspect no-preload-587009 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:51:16.965229 1077343 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 07:51:16.969175 1077343 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
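The bash one-liner above makes the /etc/hosts update idempotent: filter out any existing host.minikube.internal mapping, append the fresh one, write to a temp file, and copy it back over the original. The same idea in Go (a sketch; minikube actually runs the shell form over SSH):

```go
package main

import (
	"log"
	"os"
	"strings"
)

// upsertHost removes any line already mapping host and appends "ip\thost",
// mirroring the { grep -v ...; echo ...; } > /tmp/h.$$; cp idiom in the log.
func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale mapping
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + ip + "\t" + host + "\n"
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(out), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // replace the original in one step
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
```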
	I1210 07:51:16.978619 1077343 kubeadm.go:884] updating cluster {Name:no-preload-587009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:51:16.978743 1077343 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:51:16.978798 1077343 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:51:17.014301 1077343 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:51:17.014333 1077343 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:51:17.014341 1077343 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1210 07:51:17.014532 1077343 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-587009 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
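The kubelet drop-in above blanks ExecStart and re-sets it with node-specific flags (binary version, hostname override, node IP). Rendering such a unit is a text/template job; the template below reproduces the drop-in's shape from the log, with the parameter names being my own rather than minikube's:

```go
package main

import (
	"log"
	"os"
	"text/template"
)

// unitTmpl reproduces the shape of the kubelet unit seen in the log above.
const unitTmpl = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.K8sVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	err := t.Execute(os.Stdout, map[string]string{
		"Runtime":    "containerd",
		"K8sVersion": "v1.35.0-beta.0",
		"NodeName":   "no-preload-587009",
		"NodeIP":     "192.168.85.2",
	})
	if err != nil {
		log.Fatal(err)
	}
}
```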
	I1210 07:51:17.014625 1077343 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:51:17.044039 1077343 cni.go:84] Creating CNI manager for ""
	I1210 07:51:17.044060 1077343 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:51:17.044082 1077343 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:51:17.044104 1077343 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-587009 NodeName:no-preload-587009 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:51:17.044222 1077343 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-587009"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
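The kubeadm config rendered above (later scp'd as /var/tmp/minikube/kubeadm.yaml.new) is four YAML documents in one stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A sketch that walks such a stream and reports each document's kind, using gopkg.in/yaml.v3 (illustrative; not minikube's config handling):

```go
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// kindHeader is the minimum every kubeadm document carries.
type kindHeader struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	f, err := os.Open("kubeadm.yaml") // e.g. a copy of the config shown above
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var h kindHeader
		if err := dec.Decode(&h); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s\n", h.APIVersion, h.Kind)
	}
}
```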
	
	I1210 07:51:17.044289 1077343 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:51:17.052024 1077343 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:51:17.052101 1077343 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:51:17.059722 1077343 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 07:51:17.072494 1077343 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:51:17.086253 1077343 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1210 07:51:17.099376 1077343 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:51:17.102883 1077343 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:51:17.112330 1077343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:17.225530 1077343 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:51:17.246996 1077343 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009 for IP: 192.168.85.2
	I1210 07:51:17.247021 1077343 certs.go:195] generating shared ca certs ...
	I1210 07:51:17.247038 1077343 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:17.247186 1077343 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 07:51:17.247238 1077343 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 07:51:17.247248 1077343 certs.go:257] generating profile certs ...
	I1210 07:51:17.247347 1077343 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/client.key
	I1210 07:51:17.247407 1077343 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.key.841ee17a
	I1210 07:51:17.247454 1077343 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/proxy-client.key
	I1210 07:51:17.247566 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 07:51:17.247604 1077343 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 07:51:17.247617 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:51:17.247646 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:51:17.247674 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:51:17.247712 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 07:51:17.247768 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:51:17.248384 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:51:17.265969 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:51:17.284190 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:51:17.302881 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:51:17.324073 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:51:17.341990 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 07:51:17.359614 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:51:17.377843 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:51:17.395426 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 07:51:17.413039 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:51:17.430522 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 07:51:17.447821 1077343 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:51:17.460777 1077343 ssh_runner.go:195] Run: openssl version
	I1210 07:51:17.467243 1077343 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 07:51:17.474706 1077343 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 07:51:17.482273 1077343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 07:51:17.485950 1077343 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 07:51:17.486025 1077343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 07:51:17.526902 1077343 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:51:17.534224 1077343 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 07:51:17.541448 1077343 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 07:51:17.549037 1077343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 07:51:17.552765 1077343 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 07:51:17.552832 1077343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 07:51:17.595755 1077343 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:51:17.603128 1077343 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:17.610926 1077343 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:51:17.618981 1077343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:17.622497 1077343 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:17.622563 1077343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:17.663609 1077343 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
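The repeating triple above — ln -fs the cert into /usr/share/ca-certificates, openssl x509 -hash -noout, then test -L /etc/ssl/certs/<hash>.0 — maintains the OpenSSL-style hash-named symlink farm that TLS clients use to locate trust anchors. A sketch of one cycle, shelling out to openssl for the subject hash (paths are the ones from this log; the helper is illustrative):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink creates certsDir/<subject_hash>.0 pointing at certPath,
// the same layout the openssl/ln sequence in the log produces.
func hashLink(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // -f semantics: replace any stale link
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked:", link)
}
```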
	I1210 07:51:17.670957 1077343 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:51:17.674676 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:51:17.715746 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:51:17.758195 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:51:17.799081 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:51:17.840047 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:51:17.880964 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
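`openssl x509 -checkend 86400` exits non-zero when a certificate expires within the next 86400 seconds (24h), which is how the block above screens every control-plane cert before reusing the existing cluster. The pure-Go equivalent is a NotAfter comparison:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside d,
// the same check `openssl x509 -checkend <seconds>` performs.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
```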
	I1210 07:51:17.921878 1077343 kubeadm.go:401] StartCluster: {Name:no-preload-587009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:51:17.921988 1077343 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:51:17.922092 1077343 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:51:17.951649 1077343 cri.go:89] found id: ""
	I1210 07:51:17.951796 1077343 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:51:17.959534 1077343 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:51:17.959555 1077343 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:51:17.959635 1077343 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:51:17.966920 1077343 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:51:17.967331 1077343 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-587009" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:17.967425 1077343 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-784887/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-587009" cluster setting kubeconfig missing "no-preload-587009" context setting]
	I1210 07:51:17.967687 1077343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:17.968903 1077343 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:51:17.977669 1077343 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1210 07:51:17.977707 1077343 kubeadm.go:602] duration metric: took 18.146766ms to restartPrimaryControlPlane
	I1210 07:51:17.977718 1077343 kubeadm.go:403] duration metric: took 55.849318ms to StartCluster
	I1210 07:51:17.977733 1077343 settings.go:142] acquiring lock: {Name:mkea9bfe0055ec090e4ce10a287599541b65b38e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:17.977796 1077343 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:17.978427 1077343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:17.978652 1077343 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:51:17.978958 1077343 config.go:182] Loaded profile config "no-preload-587009": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:17.979006 1077343 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:51:17.979072 1077343 addons.go:70] Setting storage-provisioner=true in profile "no-preload-587009"
	I1210 07:51:17.979085 1077343 addons.go:239] Setting addon storage-provisioner=true in "no-preload-587009"
	I1210 07:51:17.979106 1077343 host.go:66] Checking if "no-preload-587009" exists ...
	I1210 07:51:17.979123 1077343 addons.go:70] Setting dashboard=true in profile "no-preload-587009"
	I1210 07:51:17.979139 1077343 addons.go:239] Setting addon dashboard=true in "no-preload-587009"
	W1210 07:51:17.979155 1077343 addons.go:248] addon dashboard should already be in state true
	I1210 07:51:17.979179 1077343 host.go:66] Checking if "no-preload-587009" exists ...
	I1210 07:51:17.979564 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:17.979606 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:17.982091 1077343 addons.go:70] Setting default-storageclass=true in profile "no-preload-587009"
	I1210 07:51:17.982247 1077343 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-587009"
	I1210 07:51:17.983173 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:17.984528 1077343 out.go:179] * Verifying Kubernetes components...
	I1210 07:51:17.987357 1077343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:18.030694 1077343 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:51:18.030828 1077343 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 07:51:18.034622 1077343 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 07:51:14.810511 1078428 out.go:252] * Restarting existing docker container for "newest-cni-237317" ...
	I1210 07:51:14.810602 1078428 cli_runner.go:164] Run: docker start newest-cni-237317
	I1210 07:51:15.140257 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:15.163514 1078428 kic.go:430] container "newest-cni-237317" state is running.
	I1210 07:51:15.165120 1078428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:51:15.200178 1078428 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json ...
	I1210 07:51:15.200425 1078428 machine.go:94] provisionDockerMachine start ...
	I1210 07:51:15.200484 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:15.234652 1078428 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:15.234972 1078428 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33845 <nil> <nil>}
	I1210 07:51:15.234980 1078428 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:51:15.238112 1078428 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 07:51:18.394621 1078428 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-237317
	
	I1210 07:51:18.394726 1078428 ubuntu.go:182] provisioning hostname "newest-cni-237317"
	I1210 07:51:18.394818 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:18.424081 1078428 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:18.424400 1078428 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33845 <nil> <nil>}
	I1210 07:51:18.424411 1078428 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-237317 && echo "newest-cni-237317" | sudo tee /etc/hostname
	I1210 07:51:18.589360 1078428 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-237317
	
	I1210 07:51:18.589454 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:18.613196 1078428 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:18.613511 1078428 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33845 <nil> <nil>}
	I1210 07:51:18.613536 1078428 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-237317' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-237317/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-237317' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:51:18.750663 1078428 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:51:18.750693 1078428 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 07:51:18.750726 1078428 ubuntu.go:190] setting up certificates
	I1210 07:51:18.750745 1078428 provision.go:84] configureAuth start
	I1210 07:51:18.750808 1078428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:51:18.768151 1078428 provision.go:143] copyHostCerts
	I1210 07:51:18.768234 1078428 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 07:51:18.768250 1078428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 07:51:18.768328 1078428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 07:51:18.768450 1078428 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 07:51:18.768462 1078428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 07:51:18.768492 1078428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 07:51:18.768566 1078428 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 07:51:18.768583 1078428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 07:51:18.768617 1078428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 07:51:18.768682 1078428 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.newest-cni-237317 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-237317]
	I1210 07:51:19.084729 1078428 provision.go:177] copyRemoteCerts
	I1210 07:51:19.084804 1078428 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:51:19.084849 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.104109 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:19.203019 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:51:19.223435 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:51:19.240802 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:51:19.257611 1078428 provision.go:87] duration metric: took 506.840522ms to configureAuth
	I1210 07:51:19.257643 1078428 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:51:19.257850 1078428 config.go:182] Loaded profile config "newest-cni-237317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:19.257864 1078428 machine.go:97] duration metric: took 4.057430572s to provisionDockerMachine
	I1210 07:51:19.257873 1078428 start.go:293] postStartSetup for "newest-cni-237317" (driver="docker")
	I1210 07:51:19.257887 1078428 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:51:19.257947 1078428 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:51:19.257992 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.274867 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:19.371336 1078428 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:51:19.375463 1078428 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:51:19.375497 1078428 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:51:19.375509 1078428 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 07:51:19.375559 1078428 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 07:51:19.375641 1078428 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 07:51:19.375745 1078428 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:51:19.386080 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:51:19.406230 1078428 start.go:296] duration metric: took 148.339109ms for postStartSetup
	I1210 07:51:19.406314 1078428 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:51:19.406379 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.424523 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:18.034780 1077343 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:18.034793 1077343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:51:18.034874 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:18.037543 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 07:51:18.037568 1077343 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 07:51:18.037639 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:18.041604 1077343 addons.go:239] Setting addon default-storageclass=true in "no-preload-587009"
	I1210 07:51:18.041645 1077343 host.go:66] Checking if "no-preload-587009" exists ...
	I1210 07:51:18.042060 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:18.105147 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:18.114730 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:18.115497 1077343 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:18.115511 1077343 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:51:18.115563 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:18.135449 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:18.230094 1077343 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:51:18.264441 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:18.283658 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 07:51:18.283729 1077343 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 07:51:18.329062 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 07:51:18.329133 1077343 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 07:51:18.353549 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 07:51:18.353629 1077343 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 07:51:18.357622 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:18.376127 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 07:51:18.376202 1077343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 07:51:18.447999 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 07:51:18.448021 1077343 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 07:51:18.470186 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 07:51:18.470208 1077343 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 07:51:18.489233 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 07:51:18.489255 1077343 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 07:51:18.503805 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 07:51:18.503828 1077343 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 07:51:18.521545 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:18.521566 1077343 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 07:51:18.536611 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:19.053453 1077343 node_ready.go:35] waiting up to 6m0s for node "no-preload-587009" to be "Ready" ...
	W1210 07:51:19.053800 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.053834 1077343 retry.go:31] will retry after 261.467752ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:19.053883 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.053894 1077343 retry.go:31] will retry after 368.94912ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:19.054089 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.054104 1077343 retry.go:31] will retry after 338.426434ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.315446 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:19.382015 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.382044 1077343 retry.go:31] will retry after 337.060159ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.393358 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:19.424101 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:19.491743 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.491780 1077343 retry.go:31] will retry after 471.881278ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:19.538786 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.538838 1077343 retry.go:31] will retry after 528.879721ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.719721 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:19.790713 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.790742 1077343 retry.go:31] will retry after 510.29035ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.964160 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:20.068233 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:20.070746 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.070792 1077343 retry.go:31] will retry after 543.265245ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:20.148457 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.148492 1077343 retry.go:31] will retry after 460.630823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.301882 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:20.397427 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.397476 1077343 retry.go:31] will retry after 801.303312ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
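The `retry.go:31] will retry after …` lines above show minikube's addon-apply loop backing off while the apiserver on localhost:8443 is still refusing connections. A minimal Go sketch of that jittered-backoff retry pattern, under stated assumptions (applyManifest and the delay schedule are hypothetical stand-ins, not minikube's actual implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// applyManifest is a hypothetical stand-in for running
// `kubectl apply -f <manifest>` over SSH, as the log lines above do.
func applyManifest(name string) error {
	return errors.New("connect: connection refused") // apiserver not up yet
}

// retryWithBackoff retries fn until it succeeds or attempts run out,
// sleeping a jittered, growing delay between tries - mirroring the
// "will retry after 261.467752ms" messages in the log.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	_ = retryWithBackoff(5, 300*time.Millisecond, func() error {
		return applyManifest("/etc/kubernetes/addons/storage-provisioner.yaml")
	})
}

The apply eventually succeeds once the restarted apiserver begins answering on 8443, which is why the loop keeps the error and retries rather than failing the addon setup outright.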
	I1210 07:51:19.524843 1078428 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:51:19.530920 1078428 fix.go:56] duration metric: took 4.767134196s for fixHost
	I1210 07:51:19.530943 1078428 start.go:83] releasing machines lock for "newest-cni-237317", held for 4.767180038s
	I1210 07:51:19.531010 1078428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:51:19.550838 1078428 ssh_runner.go:195] Run: cat /version.json
	I1210 07:51:19.550877 1078428 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:51:19.550890 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.550934 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.570871 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:19.573219 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:19.666233 1078428 ssh_runner.go:195] Run: systemctl --version
	I1210 07:51:19.757488 1078428 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:51:19.762554 1078428 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:51:19.762646 1078428 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:51:19.772614 1078428 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:51:19.772688 1078428 start.go:496] detecting cgroup driver to use...
	I1210 07:51:19.772735 1078428 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:51:19.772810 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:51:19.790830 1078428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:51:19.808563 1078428 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:51:19.808685 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:51:19.825219 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:51:19.839550 1078428 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:51:19.957848 1078428 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:51:20.106011 1078428 docker.go:234] disabling docker service ...
	I1210 07:51:20.106089 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:51:20.124597 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:51:20.139030 1078428 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:51:20.264730 1078428 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:51:20.405057 1078428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:51:20.418041 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:51:20.434060 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:51:20.443707 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:51:20.453162 1078428 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:51:20.453287 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:51:20.462485 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:51:20.471477 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:51:20.480685 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:51:20.489771 1078428 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:51:20.498259 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:51:20.507883 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:51:20.516803 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:51:20.525782 1078428 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:51:20.533254 1078428 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:51:20.540718 1078428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:20.693669 1078428 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 07:51:20.831153 1078428 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:51:20.831249 1078428 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
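The "Will wait 60s for socket path" step above polls for /run/containerd/containerd.sock to reappear after the containerd restart. A self-contained sketch of that kind of socket poll (the path and 60s timeout come from the log; the 200ms polling interval is an assumption, not minikube's actual value):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses,
// like minikube's "Will wait 60s for socket path" step.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is back, containerd has restarted
		}
		time.Sleep(200 * time.Millisecond) // poll interval: an assumption
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}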
	I1210 07:51:20.835049 1078428 start.go:564] Will wait 60s for crictl version
	I1210 07:51:20.835127 1078428 ssh_runner.go:195] Run: which crictl
	I1210 07:51:20.838628 1078428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:51:20.863125 1078428 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:51:20.863217 1078428 ssh_runner.go:195] Run: containerd --version
	I1210 07:51:20.884709 1078428 ssh_runner.go:195] Run: containerd --version
	I1210 07:51:20.910533 1078428 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 07:51:20.913646 1078428 cli_runner.go:164] Run: docker network inspect newest-cni-237317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"

	I1210 07:51:20.930416 1078428 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 07:51:20.934716 1078428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:51:20.948181 1078428 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 07:51:20.951046 1078428 kubeadm.go:884] updating cluster {Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:51:20.951211 1078428 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:51:20.951303 1078428 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:51:20.976663 1078428 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:51:20.976691 1078428 containerd.go:534] Images already preloaded, skipping extraction
	I1210 07:51:20.976756 1078428 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:51:21.000721 1078428 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:51:21.000745 1078428 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:51:21.000753 1078428 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1210 07:51:21.000851 1078428 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-237317 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:51:21.000919 1078428 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:51:21.027129 1078428 cni.go:84] Creating CNI manager for ""
	I1210 07:51:21.027160 1078428 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:51:21.027182 1078428 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 07:51:21.027206 1078428 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-237317 NodeName:newest-cni-237317 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:51:21.027326 1078428 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-237317"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
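The generated kubeadm config above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by `---`. A stdlib-only Go sketch of splitting such a stream and listing each document's kind (an editor's illustration, not how minikube itself consumes the file):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// kindRe pulls the top-level "kind:" value out of a YAML document.
var kindRe = regexp.MustCompile(`(?m)^kind:\s*(\S+)`)

// listKinds splits a multi-document config (like the one rendered
// above) on "---" separators and reports each document's kind.
func listKinds(config string) []string {
	var kinds []string
	for _, doc := range strings.Split(config, "\n---\n") {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			kinds = append(kinds, m[1])
		}
	}
	return kinds
}

func main() {
	// Trimmed stand-in for the generated config; the full version is in the log.
	config := `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration`
	fmt.Println(listKinds(config))
	// Output: [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
}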
	I1210 07:51:21.027402 1078428 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:51:21.035339 1078428 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:51:21.035477 1078428 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:51:21.043040 1078428 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 07:51:21.056144 1078428 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:51:21.068486 1078428 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1210 07:51:21.080830 1078428 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:51:21.084334 1078428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
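The one-liner above is minikube's idempotent /etc/hosts rewrite: strip any stale control-plane.minikube.internal entry, re-append the current control-plane IP, and copy the temp file back into place. Expanded for readability (same commands, same semantics):

	# drop any existing control-plane.minikube.internal line, re-append the
	# current mapping, then install the result over /etc/hosts
	{
	  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  echo "192.168.76.2	control-plane.minikube.internal"
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts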
	I1210 07:51:21.093747 1078428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:21.227754 1078428 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:51:21.255098 1078428 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317 for IP: 192.168.76.2
	I1210 07:51:21.255120 1078428 certs.go:195] generating shared ca certs ...
	I1210 07:51:21.255146 1078428 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:21.255299 1078428 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 07:51:21.255358 1078428 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 07:51:21.255372 1078428 certs.go:257] generating profile certs ...
	I1210 07:51:21.255486 1078428 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.key
	I1210 07:51:21.255553 1078428 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f
	I1210 07:51:21.255599 1078428 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key
	I1210 07:51:21.255719 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 07:51:21.255759 1078428 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 07:51:21.255770 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:51:21.255801 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:51:21.255838 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:51:21.255870 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 07:51:21.255919 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:51:21.256545 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:51:21.311093 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:51:21.352581 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:51:21.373410 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:51:21.394506 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:51:21.429692 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:51:21.462387 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:51:21.492668 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 07:51:21.520168 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 07:51:21.538625 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:51:21.556477 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 07:51:21.574823 1078428 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:51:21.587970 1078428 ssh_runner.go:195] Run: openssl version
	I1210 07:51:21.594082 1078428 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:21.601606 1078428 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:51:21.609233 1078428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:21.613206 1078428 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:21.613303 1078428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:21.655122 1078428 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:51:21.662415 1078428 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 07:51:21.669633 1078428 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 07:51:21.677051 1078428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 07:51:21.680913 1078428 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 07:51:21.680973 1078428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 07:51:21.722892 1078428 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:51:21.730172 1078428 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 07:51:21.737341 1078428 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 07:51:21.744828 1078428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 07:51:21.748681 1078428 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 07:51:21.748767 1078428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 07:51:21.790554 1078428 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
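The openssl/test pairs above exercise OpenSSL's subject-hash lookup convention: CA certificates under /etc/ssl/certs are found via a <hash>.0 symlink derived from the certificate's subject name (b5213941.0 for minikubeCA.pem in this run). The link-creation step itself is not shown because each `test -L` succeeded; a sketch of what it would look like, assuming the same paths:

	# compute the subject-name hash and (re)create the lookup symlink that
	# the `sudo test -L` probes above check for
	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. b5213941
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"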
	I1210 07:51:21.797952 1078428 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:51:21.801618 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:51:21.842558 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:51:21.883251 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:51:21.924099 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:51:21.965360 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:51:22.007244 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
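The -checkend 86400 runs above are 24-hour expiry probes: openssl exits non-zero if the certificate will have expired 86400 seconds from now, which is the signal for regenerating it. In isolation:

	# the exit status says whether the cert survives the next 24h; the
	# "Certificate will (not) expire" message on stdout is informational only
	if ! openssl x509 -noout -checkend 86400 \
	     -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	  echo "certificate expires within 24h; needs regeneration" >&2
	fi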
	I1210 07:51:22.049094 1078428 kubeadm.go:401] StartCluster: {Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:51:22.049233 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:51:22.049334 1078428 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:51:22.093879 1078428 cri.go:89] found id: ""
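An empty `found id: ""` above simply means crictl reported no kube-system containers to containerd, so there is nothing to pause or tear down before the restart. The underlying query, runnable as-is on the node:

	# list all (including stopped) container IDs labelled with the
	# kube-system pod namespace; empty output matches the log above
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system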
	I1210 07:51:22.094034 1078428 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:51:22.108858 1078428 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:51:22.108920 1078428 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:51:22.109002 1078428 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:51:22.119866 1078428 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:51:22.120478 1078428 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-237317" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:22.120794 1078428 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-784887/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-237317" cluster setting kubeconfig missing "newest-cni-237317" context setting]
	I1210 07:51:22.121355 1078428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:22.123034 1078428 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:51:22.139211 1078428 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1210 07:51:22.139284 1078428 kubeadm.go:602] duration metric: took 30.344057ms to restartPrimaryControlPlane
	I1210 07:51:22.139309 1078428 kubeadm.go:403] duration metric: took 90.22699ms to StartCluster
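The restart decision above boils down to two checks: the kubelet/etcd state files already exist on disk, and the freshly rendered kubeadm config is identical to the one in place, so no reconfiguration is needed. A shell sketch of that decision (the real logic lives in kubeadm.go's restartPrimaryControlPlane, so this is illustrative only):

	# state exists on disk and the config is unchanged -> restart in place
	if sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml \
	        /var/lib/minikube/etcd >/dev/null 2>&1 &&
	   sudo diff -u /var/tmp/minikube/kubeadm.yaml \
	        /var/tmp/minikube/kubeadm.yaml.new >/dev/null
	then
	  echo "running cluster does not require reconfiguration"
	fi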
	I1210 07:51:22.139351 1078428 settings.go:142] acquiring lock: {Name:mkea9bfe0055ec090e4ce10a287599541b65b38e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:22.139430 1078428 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:22.140615 1078428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:22.141197 1078428 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:51:22.141378 1078428 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:51:22.149299 1078428 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-237317"
	I1210 07:51:22.149322 1078428 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-237317"
	I1210 07:51:22.149353 1078428 host.go:66] Checking if "newest-cni-237317" exists ...
	I1210 07:51:22.149966 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:22.141985 1078428 config.go:182] Loaded profile config "newest-cni-237317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:22.150417 1078428 addons.go:70] Setting dashboard=true in profile "newest-cni-237317"
	I1210 07:51:22.150441 1078428 addons.go:239] Setting addon dashboard=true in "newest-cni-237317"
	W1210 07:51:22.150449 1078428 addons.go:248] addon dashboard should already be in state true
	I1210 07:51:22.150502 1078428 host.go:66] Checking if "newest-cni-237317" exists ...
	I1210 07:51:22.151022 1078428 addons.go:70] Setting default-storageclass=true in profile "newest-cni-237317"
	I1210 07:51:22.151064 1078428 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-237317"
	I1210 07:51:22.151139 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:22.151406 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:22.154353 1078428 out.go:179] * Verifying Kubernetes components...
	I1210 07:51:22.159801 1078428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:22.209413 1078428 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:51:22.216779 1078428 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:22.216810 1078428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:51:22.216899 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:22.223328 1078428 addons.go:239] Setting addon default-storageclass=true in "newest-cni-237317"
	I1210 07:51:22.223372 1078428 host.go:66] Checking if "newest-cni-237317" exists ...
	I1210 07:51:22.223787 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:22.224255 1078428 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 07:51:22.227259 1078428 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 07:51:22.230643 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 07:51:22.230670 1078428 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 07:51:22.230738 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:22.262205 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:22.304886 1078428 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:22.304913 1078428 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:51:22.305020 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:22.320571 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:22.350629 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:22.414331 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:22.428355 1078428 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:51:22.476480 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 07:51:22.476506 1078428 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 07:51:22.499604 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:22.511381 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.511434 1078428 retry.go:31] will retry after 354.449722ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.512377 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 07:51:22.512398 1078428 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 07:51:22.525695 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 07:51:22.525721 1078428 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 07:51:22.549890 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 07:51:22.549921 1078428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 07:51:22.571318 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 07:51:22.571360 1078428 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 07:51:22.590078 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 07:51:22.590107 1078428 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 07:51:22.605317 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 07:51:22.605341 1078428 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 07:51:22.618168 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 07:51:22.618200 1078428 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 07:51:22.632058 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:22.632138 1078428 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 07:51:22.645108 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:22.866802 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:23.047272 1078428 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:51:23.047355 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:23.047482 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.047505 1078428 retry.go:31] will retry after 239.047353ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:23.047709 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.047727 1078428 retry.go:31] will retry after 188.716917ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:23.047786 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.047796 1078428 retry.go:31] will retry after 517.712293ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.237633 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:23.287256 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:23.302152 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.302252 1078428 retry.go:31] will retry after 469.586518ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:23.346821 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.346867 1078428 retry.go:31] will retry after 517.463027ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.548102 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:23.566734 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:23.638131 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.638161 1078428 retry.go:31] will retry after 398.122111ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.772509 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:23.859471 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.859510 1078428 retry.go:31] will retry after 826.751645ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.865483 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:23.933950 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.933981 1078428 retry.go:31] will retry after 776.320293ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.037254 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:24.047892 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:24.103304 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.103348 1078428 retry.go:31] will retry after 781.805737ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
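Every failure above is the same symptom: kubectl on the node cannot reach the apiserver on localhost:8443 yet ("connection refused"), so minikube's retry.go re-applies each addon manifest under a jittered backoff (354ms, 517ms, 781ms, ...). Stripped of the jitter, the loop amounts to something like the sketch below; the fixed delays are stand-ins, not minikube's actual schedule:

	# keep re-applying until the apiserver starts answering on :8443
	for delay in 0.5 1 2 4; do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	    -f /etc/kubernetes/addons/storage-provisioner.yaml && break
	  sleep "$delay"
	done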
	I1210 07:51:20.609734 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:20.615162 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:20.763154 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.763202 1077343 retry.go:31] will retry after 629.698549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:20.763322 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.763340 1077343 retry.go:31] will retry after 624.408887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:21.054168 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:21.199599 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:21.288128 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:21.288156 1077343 retry.go:31] will retry after 1.429543278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:21.388486 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:21.393905 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:21.513396 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:21.513426 1077343 retry.go:31] will retry after 1.363983036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:21.522339 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:21.522370 1077343 retry.go:31] will retry after 1.881789089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.718226 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:22.784732 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.784765 1077343 retry.go:31] will retry after 2.14784628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.877998 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:22.948118 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.948146 1077343 retry.go:31] will retry after 2.832610868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:23.054783 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:23.404396 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:23.467879 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.467914 1077343 retry.go:31] will retry after 2.135960827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.933362 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:24.999854 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.999895 1077343 retry.go:31] will retry after 3.6382738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.548307 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:24.687434 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:24.711319 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:24.773539 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.773577 1078428 retry.go:31] will retry after 997.771985ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:24.790786 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.790863 1078428 retry.go:31] will retry after 982.839582ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.886098 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:24.963470 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.963508 1078428 retry.go:31] will retry after 1.65409552s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.047816 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:25.547590 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:25.771778 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:25.774151 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:25.936732 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.936801 1078428 retry.go:31] will retry after 1.015181303s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:25.947734 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.947767 1078428 retry.go:31] will retry after 1.482437442s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:26.048146 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:26.547461 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:26.617808 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:26.678401 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:26.678435 1078428 retry.go:31] will retry after 1.557494695s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:26.952842 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:27.019482 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.019568 1078428 retry.go:31] will retry after 1.273355747s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.047573 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:27.431325 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:27.498014 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.498046 1078428 retry.go:31] will retry after 1.046464225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.548153 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:28.047490 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:28.236708 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:28.293309 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:28.313086 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.313117 1078428 retry.go:31] will retry after 2.925748723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:28.376082 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.376136 1078428 retry.go:31] will retry after 3.458373128s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.545585 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:28.548098 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:28.611335 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.611369 1078428 retry.go:31] will retry after 3.856495335s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:29.047665 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:25.554994 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:25.604337 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:25.669224 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.669262 1077343 retry.go:31] will retry after 2.194006804s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.781321 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:25.929708 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.929740 1077343 retry.go:31] will retry after 3.276039002s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.863966 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:27.927673 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.927709 1077343 retry.go:31] will retry after 5.303571514s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:28.054575 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:28.639292 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:28.698653 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.698686 1077343 retry.go:31] will retry after 3.005783671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:29.206806 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:29.264930 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:29.264960 1077343 retry.go:31] will retry after 2.489245949s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:29.547947 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:30.047725 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:30.548382 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:31.048336 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:31.239688 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:31.305382 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.305411 1078428 retry.go:31] will retry after 5.48588333s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.547900 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:31.835667 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:31.907250 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.907288 1078428 retry.go:31] will retry after 3.413940388s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:32.047433 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:32.468741 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:32.529582 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:32.529616 1078428 retry.go:31] will retry after 2.765741211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:32.547808 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:33.048388 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:33.547638 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:34.048299 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:30.554528 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:31.705403 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:31.754983 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:31.764053 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.764088 1077343 retry.go:31] will retry after 6.263299309s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:31.824900 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.824937 1077343 retry.go:31] will retry after 8.063912103s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:32.554572 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:33.232049 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:33.291801 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:33.291838 1077343 retry.go:31] will retry after 5.361341065s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr identical to the preceding apply failure; elided]
	W1210 07:51:34.554757 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
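In parallel with the addon retries, the node_ready.go lines poll the node object for its "Ready" condition and get connection refused from 192.168.85.2:8443. A stdlib-only sketch of that poll follows, assuming the request would be authorized (a real client presents cluster credentials; the point here is only the failure mode):

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

// just enough of the Node object to read status.conditions
type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		// the cluster serving cert is self-signed; fine for a local probe sketch
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.85.2:8443/api/v1/nodes/no-preload-587009")
	if err != nil {
		fmt.Println("will retry:", err) // "connect: connection refused" while the apiserver is down
		return
	}
	defer resp.Body.Close()
	var n node
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		fmt.Println("decode:", err)
		return
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			fmt.Println("node Ready =", c.Status)
		}
	}
}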
	I1210 07:51:34.547845 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:35.048329 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:35.295932 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:35.322379 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:35.361522 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:35.361555 1078428 retry.go:31] will retry after 3.648316362s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr identical to the preceding apply failure; elided]
	W1210 07:51:35.394430 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:35.394485 1078428 retry.go:31] will retry after 5.549499405s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr identical to the preceding apply failure; elided]
	I1210 07:51:35.547462 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:36.048235 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:36.547640 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
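The half-second cadence of ssh_runner lines is minikube waiting for an apiserver process to appear inside the node. It runs pgrep over SSH; the sketch below just execs the same command locally to show what is being checked (pgrep exits non-zero until a matching process exists):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// -f matches against the full command line, -x requires a whole-line match,
	// -n picks the newest matching process
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		// pgrep exits 1 when nothing matches, i.e. no apiserver process yet
		fmt.Println("kube-apiserver not running yet:", err)
		return
	}
	fmt.Printf("kube-apiserver pid: %s", out)
}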
	I1210 07:51:36.792053 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:36.857078 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:36.857110 1078428 retry.go:31] will retry after 8.697501731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr identical to the preceding apply failure; elided]
	I1210 07:51:37.048326 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:37.548396 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:38.047529 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:38.547464 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:39.010651 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:39.048217 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:39.071638 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:39.071669 1078428 retry.go:31] will retry after 13.355816146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr identical to the preceding apply failure; elided]
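The retry.go lines show the delay growing roughly exponentially with jitter (5.36s, 3.65s, 5.55s, 8.70s, 13.36s, ... across the two profiles). A sketch of that pattern as a plain loop, under the assumption that minikube's real helper in pkg/util/retry behaves similarly (this is not its actual implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries fn with exponential backoff plus jitter, capped at maxDelay.
func retryExpo(fn func() error, base, maxDelay time.Duration, attempts int) error {
	delay := base
	for i := 0; i < attempts; i++ {
		err := fn()
		if err == nil {
			return nil
		}
		// jitter keeps concurrent appliers, like the two profiles in this
		// log, from retrying against the apiserver in lockstep
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
	return errors.New("retries exhausted")
}

func main() {
	_ = retryExpo(func() error {
		return errors.New("connect: connection refused")
	}, 2*time.Second, 30*time.Second, 3)
}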
	W1210 07:51:37.053891 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:38.027881 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:38.116733 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:38.116768 1077343 retry.go:31] will retry after 12.105620641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr identical to the preceding apply failure; elided]
	I1210 07:51:38.653613 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:38.715053 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:38.715087 1077343 retry.go:31] will retry after 11.375750542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr identical to the preceding apply failure; elided]
	W1210 07:51:39.554885 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:39.889521 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:39.947993 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:39.948032 1077343 retry.go:31] will retry after 6.34767532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr identical to the preceding apply failure; elided]
	I1210 07:51:39.547555 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:40.048271 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:40.548333 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:40.944176 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:41.005827 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:41.005869 1078428 retry.go:31] will retry after 6.58383212s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr identical to the preceding apply failure; elided]
	I1210 07:51:41.047819 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:41.547642 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:42.048470 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:42.547646 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:43.047482 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:43.548313 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:44.048345 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:42.054758 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:51:44.554149 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:44.547780 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:45.048251 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:45.547682 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:45.555791 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:45.648631 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:45.648667 1078428 retry.go:31] will retry after 11.694093059s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr identical to the preceding apply failure; elided]
	I1210 07:51:46.048267 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:46.547745 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:47.047711 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:47.547488 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:47.590140 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:47.657175 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:47.657216 1078428 retry.go:31] will retry after 17.707179987s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr identical to the preceding apply failure; elided]
	I1210 07:51:48.047554 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:48.547523 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:49.048229 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:46.296554 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:46.375385 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:46.375418 1077343 retry.go:31] will retry after 17.860418691s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr identical to the preceding apply failure; elided]
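All of the validation failures in this section reduce to one request: before applying, kubectl fetches the cluster's OpenAPI schema from /openapi/v2 to validate the manifests client-side. A quick probe of that endpoint, assuming the same localhost:8443 apiserver address as the log, reproduces the failure:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 32 * time.Second, // mirrors the ?timeout=32s in kubectl's request
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/openapi/v2?timeout=32s")
	if err != nil {
		// with nothing listening on 8443 this is the same "connection refused"
		// that fails validation for every manifest in this log
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("openapi endpoint reachable:", resp.Status)
}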
	W1210 07:51:47.054540 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:51:49.054867 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:50.091584 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:50.153219 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:50.153253 1077343 retry.go:31] will retry after 15.008999648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr identical to the preceding apply failure; elided]
	I1210 07:51:50.223406 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:50.279259 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:50.279296 1077343 retry.go:31] will retry after 9.416080018s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
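
The `retry.go:31] will retry after …` lines above show minikube's apply-with-backoff behavior: each `kubectl apply` fails because validation needs the apiserver's OpenAPI endpoint, which is refusing connections, so the call is re-queued with a randomized wait. As a minimal sketch of that pattern only (the function name, attempt count, and backoff range here are hypothetical, not minikube's actual retry.go API):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryApply runs `kubectl apply` and, on failure, waits a jittered backoff
// before trying again -- the shape of the retries visible in the log above.
// Hypothetical sketch; not minikube's implementation.
func retryApply(manifest string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		out, e := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if e == nil {
			return nil
		}
		err = fmt.Errorf("apply %s failed: %v: %s", manifest, e, out)
		// Randomized wait, loosely matching the ~9s-30s delays logged above.
		wait := time.Duration(5+rand.Intn(25)) * time.Second
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	if err := retryApply("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
		fmt.Println("giving up:", err)
	}
}
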
	I1210 07:51:49.547855 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:50.048310 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:50.547470 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:51.048482 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:51.547803 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:52.048220 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:52.428493 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:52.490932 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:52.490967 1078428 retry.go:31] will retry after 16.825164958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:52.548145 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:53.047509 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:53.548344 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:54.047578 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:51.553954 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:51:53.554866 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:54.547773 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:55.047551 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:55.547690 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:56.047804 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:56.547512 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:57.048500 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:57.343638 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:57.401827 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:57.401862 1078428 retry.go:31] will retry after 12.086669618s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:57.548118 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:58.047490 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:58.547566 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:59.047512 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:56.054059 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:51:58.554867 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:59.696250 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:59.757338 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:59.757373 1077343 retry.go:31] will retry after 26.778697297s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:59.547820 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:00.048277 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:00.547702 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:01.047690 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:01.548160 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:02.047532 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:02.547658 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:03.048174 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:03.547494 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:04.047488 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:52:01.054130 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:03.554867 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:04.236888 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:04.303052 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:04.303083 1077343 retry.go:31] will retry after 25.859676141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:05.163286 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:52:05.227326 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:05.227361 1077343 retry.go:31] will retry after 29.528693098s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:04.547752 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:05.047684 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:05.364684 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:52:05.426426 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:05.426483 1078428 retry.go:31] will retry after 20.310563443s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:05.547649 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:06.047574 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:06.547647 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:07.048386 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:07.548191 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:08.047499 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:08.547510 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:09.047557 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:09.316912 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:09.386785 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:09.386818 1078428 retry.go:31] will retry after 17.689212788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:09.489070 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:52:06.053981 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:08.554858 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:09.547482 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:52:09.552880 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:09.552917 1078428 retry.go:31] will retry after 27.483688335s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:10.047697 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:10.548124 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:11.047626 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:11.548296 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:12.048335 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:12.548247 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:13.047495 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:13.547530 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:14.047549 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:52:11.053980 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:13.054863 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:15.055109 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:14.547736 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:15.047574 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:15.548227 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:16.047516 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:16.548114 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:17.047567 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:17.547679 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:18.048185 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:18.548203 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:19.047660 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:52:17.055513 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:19.553887 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:19.547978 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:20.048384 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:20.548389 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:21.048134 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:21.547434 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:22.048274 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
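
The run of `sudo pgrep -xnf kube-apiserver.*minikube.*` lines, spaced roughly 500ms apart, is minikube polling for a kube-apiserver process until a deadline; at 07:52:22 it stops polling and falls back to gathering diagnostics. A minimal sketch of that polling loop, under the assumption that pgrep's exit status (0 on match) is the readiness signal (names and timeout here are hypothetical):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer checks every 500ms whether a kube-apiserver process
// exists, until the context deadline expires. Hypothetical sketch of the
// polling visible in the log; not minikube's actual code.
func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("kube-apiserver never appeared: %w", ctx.Err())
		case <-ticker.C:
			// pgrep exits 0 when at least one process matches.
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForAPIServer(ctx); err != nil {
		fmt.Println(err)
	}
}
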
	I1210 07:52:22.547540 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:22.547641 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:22.572419 1078428 cri.go:89] found id: ""
	I1210 07:52:22.572446 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.572457 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:22.572464 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:22.572530 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:22.596895 1078428 cri.go:89] found id: ""
	I1210 07:52:22.596923 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.596931 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:22.596938 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:22.597000 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:22.621678 1078428 cri.go:89] found id: ""
	I1210 07:52:22.621705 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.621713 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:22.621720 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:22.621783 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:22.646160 1078428 cri.go:89] found id: ""
	I1210 07:52:22.646188 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.646198 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:22.646205 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:22.646270 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:22.671641 1078428 cri.go:89] found id: ""
	I1210 07:52:22.671670 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.671680 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:22.671686 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:22.671750 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:22.697149 1078428 cri.go:89] found id: ""
	I1210 07:52:22.697177 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.697187 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:22.697194 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:22.697255 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:22.722276 1078428 cri.go:89] found id: ""
	I1210 07:52:22.722300 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.722318 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:22.722324 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:22.722388 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:22.751396 1078428 cri.go:89] found id: ""
	I1210 07:52:22.751422 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.751431 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
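
The cri.go/logs.go block above is a diagnostic sweep: for each expected control-plane component, run `crictl ps -a --quiet --name=<component>` and record that no container matched. A sketch of that sweep as a standalone check (the component list mirrors the log; the output format is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// List each control-plane container by name via crictl and report which are
// missing -- the same sweep the log performs above. Hypothetical sketch.
func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// crictl prints one container ID per line; empty output means no match.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}
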
	I1210 07:52:22.751440 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:22.751452 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:22.806571 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:22.806611 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:22.824584 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:22.824623 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:22.902683 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:22.894538    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.894950    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.896552    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.897020    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.898547    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:22.894538    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.894950    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.896552    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.897020    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.898547    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:22.902704 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:22.902719 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:22.928289 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:22.928326 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:52:21.554922 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:24.054424 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:25.461464 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:25.472201 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:25.472303 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:25.498226 1078428 cri.go:89] found id: ""
	I1210 07:52:25.498253 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.498263 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:25.498269 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:25.498331 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:25.524731 1078428 cri.go:89] found id: ""
	I1210 07:52:25.524759 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.524777 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:25.524789 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:25.524855 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:25.554155 1078428 cri.go:89] found id: ""
	I1210 07:52:25.554178 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.554187 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:25.554194 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:25.554252 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:25.580553 1078428 cri.go:89] found id: ""
	I1210 07:52:25.580584 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.580593 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:25.580599 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:25.580669 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:25.606241 1078428 cri.go:89] found id: ""
	I1210 07:52:25.606309 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.606341 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:25.606369 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:25.606449 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:25.630882 1078428 cri.go:89] found id: ""
	I1210 07:52:25.630912 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.630921 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:25.630928 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:25.631028 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:25.657178 1078428 cri.go:89] found id: ""
	I1210 07:52:25.657207 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.657215 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:25.657221 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:25.657282 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:25.686580 1078428 cri.go:89] found id: ""
	I1210 07:52:25.686604 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.686612 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:25.686622 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:25.686634 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:25.737209 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:52:25.742985 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:25.743060 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:52:25.816909 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:25.817156 1078428 retry.go:31] will retry after 25.212576039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:25.818420 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:25.818454 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:25.889855 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:25.881344    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.882021    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.883760    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.884373    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.885999    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:25.889919 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:25.889939 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:25.915022 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:25.915058 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:27.076870 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:27.134892 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:27.134924 1078428 retry.go:31] will retry after 48.20102621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:28.443268 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:28.454097 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:28.454172 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:28.482759 1078428 cri.go:89] found id: ""
	I1210 07:52:28.482789 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.482798 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:28.482805 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:28.482868 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:28.507737 1078428 cri.go:89] found id: ""
	I1210 07:52:28.507760 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.507769 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:28.507775 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:28.507836 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:28.532881 1078428 cri.go:89] found id: ""
	I1210 07:52:28.532907 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.532916 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:28.532923 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:28.532989 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:28.562425 1078428 cri.go:89] found id: ""
	I1210 07:52:28.562451 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.562460 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:28.562489 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:28.562551 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:28.587926 1078428 cri.go:89] found id: ""
	I1210 07:52:28.587952 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.587961 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:28.587967 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:28.588026 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:28.613523 1078428 cri.go:89] found id: ""
	I1210 07:52:28.613593 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.613617 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:28.613638 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:28.613730 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:28.637796 1078428 cri.go:89] found id: ""
	I1210 07:52:28.637864 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.637888 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:28.637907 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:28.637993 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:28.666907 1078428 cri.go:89] found id: ""
	I1210 07:52:28.666937 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.666946 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:28.666956 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:28.666968 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:28.722569 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:28.722604 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:28.738517 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:28.738592 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:28.814307 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:28.798713    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.799649    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.803563    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.804492    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.807397    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:28.814366 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:28.814395 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:28.842824 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:28.842905 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:26.536333 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:52:26.554155 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:26.621759 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:26.621788 1077343 retry.go:31] will retry after 32.881374862s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:29.054917 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:30.163626 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:30.226039 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:30.226073 1077343 retry.go:31] will retry after 27.175178767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:31.380548 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:31.391083 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:31.391159 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:31.416470 1078428 cri.go:89] found id: ""
	I1210 07:52:31.416496 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.416504 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:31.416510 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:31.416570 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:31.441740 1078428 cri.go:89] found id: ""
	I1210 07:52:31.441767 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.441776 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:31.441782 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:31.441843 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:31.465834 1078428 cri.go:89] found id: ""
	I1210 07:52:31.465860 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.465869 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:31.465875 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:31.465935 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:31.492061 1078428 cri.go:89] found id: ""
	I1210 07:52:31.492085 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.492093 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:31.492099 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:31.492177 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:31.515891 1078428 cri.go:89] found id: ""
	I1210 07:52:31.515971 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.515993 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:31.516010 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:31.516096 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:31.540039 1078428 cri.go:89] found id: ""
	I1210 07:52:31.540061 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.540069 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:31.540076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:31.540169 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:31.565345 1078428 cri.go:89] found id: ""
	I1210 07:52:31.565372 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.565388 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:31.565395 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:31.565513 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:31.590011 1078428 cri.go:89] found id: ""
	I1210 07:52:31.590035 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.590044 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:31.590074 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:31.590089 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:31.656796 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:31.649034    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.649484    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.650940    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.651317    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.652719    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:31.656816 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:31.656828 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:31.681821 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:31.681855 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:31.709786 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:31.709815 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:31.764688 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:31.764728 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:34.283681 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:34.296241 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:34.296314 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:34.337179 1078428 cri.go:89] found id: ""
	I1210 07:52:34.337201 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.337210 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:34.337216 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:34.337274 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:34.369583 1078428 cri.go:89] found id: ""
	I1210 07:52:34.369611 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.369619 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:34.369625 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:34.369683 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:34.395566 1078428 cri.go:89] found id: ""
	I1210 07:52:34.395591 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.395600 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:34.395606 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:34.395688 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:34.419610 1078428 cri.go:89] found id: ""
	I1210 07:52:34.419677 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.419702 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:34.419718 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:34.419797 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:34.444441 1078428 cri.go:89] found id: ""
	I1210 07:52:34.444511 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.444535 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:34.444550 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:34.444627 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:34.469517 1078428 cri.go:89] found id: ""
	I1210 07:52:34.469540 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.469549 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:34.469556 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:34.469618 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:34.494093 1078428 cri.go:89] found id: ""
	I1210 07:52:34.494120 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.494129 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:34.494136 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:34.494196 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	W1210 07:52:31.554771 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:34.054729 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:34.756990 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:52:34.831836 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:34.831956 1077343 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:52:34.518575 1078428 cri.go:89] found id: ""
	I1210 07:52:34.518658 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.518674 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:34.518685 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:34.518698 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:34.534743 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:34.534770 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:34.597542 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:34.589981    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.590406    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.591861    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.592187    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.593602    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:34.597564 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:34.597577 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:34.622841 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:34.622876 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:34.653362 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:34.653395 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:37.036872 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:52:37.117418 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:37.117451 1078428 retry.go:31] will retry after 42.271832156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:37.209642 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:37.220263 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:37.220360 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:37.244517 1078428 cri.go:89] found id: ""
	I1210 07:52:37.244544 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.244552 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:37.244558 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:37.244619 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:37.269073 1078428 cri.go:89] found id: ""
	I1210 07:52:37.269099 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.269108 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:37.269114 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:37.269175 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:37.292561 1078428 cri.go:89] found id: ""
	I1210 07:52:37.292587 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.292596 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:37.292604 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:37.292661 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:37.330286 1078428 cri.go:89] found id: ""
	I1210 07:52:37.330312 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.330321 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:37.330328 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:37.330388 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:37.362527 1078428 cri.go:89] found id: ""
	I1210 07:52:37.362555 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.362564 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:37.362570 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:37.362633 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:37.387887 1078428 cri.go:89] found id: ""
	I1210 07:52:37.387912 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.387921 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:37.387927 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:37.387988 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:37.412303 1078428 cri.go:89] found id: ""
	I1210 07:52:37.412329 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.412337 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:37.412344 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:37.412451 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:37.436571 1078428 cri.go:89] found id: ""
	I1210 07:52:37.436596 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.436605 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:37.436614 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:37.436626 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:37.462030 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:37.462074 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:37.489847 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:37.489875 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:37.545757 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:37.545792 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:37.561730 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:37.561763 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:37.627065 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:37.618607    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.619149    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.620826    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.621602    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.623135    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 07:52:36.554875 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:39.054027 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:40.127737 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:40.139792 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:40.139876 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:40.166917 1078428 cri.go:89] found id: ""
	I1210 07:52:40.166944 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.166952 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:40.166964 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:40.167028 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:40.193972 1078428 cri.go:89] found id: ""
	I1210 07:52:40.194000 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.194009 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:40.194015 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:40.194111 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:40.226660 1078428 cri.go:89] found id: ""
	I1210 07:52:40.226693 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.226702 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:40.226709 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:40.226774 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:40.257013 1078428 cri.go:89] found id: ""
	I1210 07:52:40.257056 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.257067 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:40.257074 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:40.257140 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:40.282449 1078428 cri.go:89] found id: ""
	I1210 07:52:40.282500 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.282509 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:40.282516 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:40.282580 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:40.332986 1078428 cri.go:89] found id: ""
	I1210 07:52:40.333018 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.333027 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:40.333050 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:40.333188 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:40.366223 1078428 cri.go:89] found id: ""
	I1210 07:52:40.366258 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.366268 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:40.366275 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:40.366347 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:40.393136 1078428 cri.go:89] found id: ""
	I1210 07:52:40.393163 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.393171 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:40.393181 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:40.393193 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:40.422285 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:40.422314 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:40.481326 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:40.481365 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:40.497675 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:40.497725 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:40.562074 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:40.554513    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.554932    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.556446    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.556761    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.558191    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:40.562093 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:40.562106 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:43.088690 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:43.099750 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:43.099828 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:43.124516 1078428 cri.go:89] found id: ""
	I1210 07:52:43.124552 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.124561 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:43.124567 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:43.124628 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:43.153325 1078428 cri.go:89] found id: ""
	I1210 07:52:43.153347 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.153356 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:43.153362 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:43.153423 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:43.178405 1078428 cri.go:89] found id: ""
	I1210 07:52:43.178429 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.178437 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:43.178443 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:43.178609 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:43.201768 1078428 cri.go:89] found id: ""
	I1210 07:52:43.201791 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.201800 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:43.201806 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:43.201865 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:43.225907 1078428 cri.go:89] found id: ""
	I1210 07:52:43.225931 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.225940 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:43.225946 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:43.226004 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:43.250803 1078428 cri.go:89] found id: ""
	I1210 07:52:43.250828 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.250837 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:43.250843 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:43.250916 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:43.275081 1078428 cri.go:89] found id: ""
	I1210 07:52:43.275147 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.275161 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:43.275168 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:43.275245 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:43.306794 1078428 cri.go:89] found id: ""
	I1210 07:52:43.306827 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.306836 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:43.306845 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:43.306857 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:43.337826 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:43.337854 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:43.396050 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:43.396089 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:43.413002 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:43.413031 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:43.479541 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:43.471065    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.471844    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.473576    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.474063    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.475610    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:43.471065    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.471844    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.473576    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.474063    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.475610    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:43.479565 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:43.479578 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
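The block above is one full iteration of minikube's apiserver health loop: probe for a kube-apiserver process, walk each control-plane component via CRI, and fall back to gathering node logs when nothing is found. The same checks can be replayed by hand inside the node (a sketch using only commands that appear verbatim in this log; run them via minikube ssh):

    # Probe for a running apiserver process, as the loop does
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # Ask containerd (via CRI) for apiserver containers, running or exited
    sudo crictl ps -a --quiet --name=kube-apiserver
    # Fallback log gathering when no containers are found
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig

With no apiserver container present, every cycle ends at the describe-nodes step with the connection-refused errors shown above.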
	W1210 07:52:41.054361 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:43.054892 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
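The interleaved node_ready warnings come from a second profile, no-preload-587009, whose test loop polls the node's Ready condition against its own apiserver at 192.168.85.2:8443 and hits the same refused connection. A minimal way to reproduce that poll from the host (a sketch; the kubectl context name is assumed to match the profile name, as minikube normally sets it):

    # Read just the Ready condition the way the wait loop does (context name assumed)
    kubectl --context no-preload-587009 get node no-preload-587009 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'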
	I1210 07:52:46.005454 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:46.017579 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:46.017658 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:46.053539 1078428 cri.go:89] found id: ""
	I1210 07:52:46.053570 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.053579 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:46.053585 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:46.053649 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:46.088548 1078428 cri.go:89] found id: ""
	I1210 07:52:46.088572 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.088581 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:46.088596 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:46.088660 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:46.126497 1078428 cri.go:89] found id: ""
	I1210 07:52:46.126571 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.126594 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:46.126613 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:46.126734 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:46.150556 1078428 cri.go:89] found id: ""
	I1210 07:52:46.150626 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.150643 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:46.150651 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:46.150719 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:46.174996 1078428 cri.go:89] found id: ""
	I1210 07:52:46.175019 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.175027 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:46.175033 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:46.175107 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:46.199701 1078428 cri.go:89] found id: ""
	I1210 07:52:46.199726 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.199735 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:46.199742 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:46.199845 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:46.224632 1078428 cri.go:89] found id: ""
	I1210 07:52:46.224657 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.224666 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:46.224672 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:46.224752 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:46.248234 1078428 cri.go:89] found id: ""
	I1210 07:52:46.248259 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.248267 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:46.248277 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:46.248334 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:46.264183 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:46.264221 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:46.342979 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:46.323053    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.323907    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.328271    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.328706    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.338602    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:46.323053    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.323907    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.328271    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.328706    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.338602    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:46.343063 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:46.343092 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:46.369476 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:46.369511 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:46.397302 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:46.397339 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:48.952567 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:48.962857 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:48.962931 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:48.992562 1078428 cri.go:89] found id: ""
	I1210 07:52:48.992589 1078428 logs.go:282] 0 containers: []
	W1210 07:52:48.992599 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:48.992606 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:48.992671 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:49.018277 1078428 cri.go:89] found id: ""
	I1210 07:52:49.018303 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.018312 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:49.018318 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:49.018387 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:49.045715 1078428 cri.go:89] found id: ""
	I1210 07:52:49.045743 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.045752 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:49.045758 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:49.045826 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:49.083318 1078428 cri.go:89] found id: ""
	I1210 07:52:49.083348 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.083358 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:49.083364 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:49.083422 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:49.109936 1078428 cri.go:89] found id: ""
	I1210 07:52:49.109958 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.109966 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:49.109989 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:49.110049 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:49.134580 1078428 cri.go:89] found id: ""
	I1210 07:52:49.134607 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.134617 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:49.134623 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:49.134681 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:49.159828 1078428 cri.go:89] found id: ""
	I1210 07:52:49.159906 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.159924 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:49.159931 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:49.160011 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:49.184837 1078428 cri.go:89] found id: ""
	I1210 07:52:49.184862 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.184872 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:49.184881 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:49.184902 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:49.210656 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:49.210691 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:49.241224 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:49.241256 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:49.303253 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:49.303297 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:49.319808 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:49.319838 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:49.389423 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:49.381044    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.381468    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.383366    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.383682    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.385122    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:49.381044    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.381468    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.383366    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.383682    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.385122    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1210 07:52:45.554347 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:47.554702 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:50.054996 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:51.030067 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:52:51.093289 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:51.093415 1078428 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
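The dashboard enable fails before any object reaches the cluster: kubectl validates each manifest against the server's OpenAPI schema, and downloading that schema needs a live apiserver on localhost:8443. The error text itself names the bypass flag, though it would not help here; with nothing listening on 8443, the apply would still fail at the connection stage, which is why the addons code retries instead. A sketch of the suggested bypass for a single manifest:

    # Skip client-side schema validation, as the error message suggests
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/dashboard-ns.yaml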
	I1210 07:52:51.889686 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:51.900249 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:51.900353 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:51.925533 1078428 cri.go:89] found id: ""
	I1210 07:52:51.925559 1078428 logs.go:282] 0 containers: []
	W1210 07:52:51.925567 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:51.925621 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:51.925706 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:51.950161 1078428 cri.go:89] found id: ""
	I1210 07:52:51.950186 1078428 logs.go:282] 0 containers: []
	W1210 07:52:51.950194 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:51.950201 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:51.950280 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:51.976938 1078428 cri.go:89] found id: ""
	I1210 07:52:51.976964 1078428 logs.go:282] 0 containers: []
	W1210 07:52:51.976972 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:51.976979 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:51.977038 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:52.006745 1078428 cri.go:89] found id: ""
	I1210 07:52:52.006841 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.006865 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:52.006887 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:52.007015 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:52.033557 1078428 cri.go:89] found id: ""
	I1210 07:52:52.033585 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.033595 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:52.033601 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:52.033672 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:52.066821 1078428 cri.go:89] found id: ""
	I1210 07:52:52.066850 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.066860 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:52.066867 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:52.066929 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:52.101024 1078428 cri.go:89] found id: ""
	I1210 07:52:52.101051 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.101060 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:52.101067 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:52.101128 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:52.130045 1078428 cri.go:89] found id: ""
	I1210 07:52:52.130070 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.130079 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:52.130088 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:52.130100 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:52.184627 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:52.184662 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:52.200733 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:52.200759 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:52.265577 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:52.257022    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.257577    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.259300    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.259861    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.261490    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:52.257022    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.257577    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.259300    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.259861    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.261490    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:52.265610 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:52.265626 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:52.291354 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:52.291390 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:52:52.555048 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:55.054639 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:54.834203 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:54.845400 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:54.845510 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:54.871357 1078428 cri.go:89] found id: ""
	I1210 07:52:54.871383 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.871392 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:54.871399 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:54.871463 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:54.897322 1078428 cri.go:89] found id: ""
	I1210 07:52:54.897352 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.897360 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:54.897366 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:54.897425 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:54.922291 1078428 cri.go:89] found id: ""
	I1210 07:52:54.922320 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.922329 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:54.922334 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:54.922405 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:54.947056 1078428 cri.go:89] found id: ""
	I1210 07:52:54.947080 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.947089 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:54.947095 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:54.947155 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:54.972572 1078428 cri.go:89] found id: ""
	I1210 07:52:54.972599 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.972608 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:54.972614 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:54.972675 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:54.997657 1078428 cri.go:89] found id: ""
	I1210 07:52:54.997685 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.997694 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:54.997700 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:54.997777 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:55.025796 1078428 cri.go:89] found id: ""
	I1210 07:52:55.025819 1078428 logs.go:282] 0 containers: []
	W1210 07:52:55.025829 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:55.025835 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:55.026185 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:55.069593 1078428 cri.go:89] found id: ""
	I1210 07:52:55.069631 1078428 logs.go:282] 0 containers: []
	W1210 07:52:55.069640 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:55.069649 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:55.069662 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:55.135748 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:55.135788 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:55.151784 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:55.151815 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:55.220457 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:55.212663    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.213305    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.215040    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.215532    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.216531    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:55.212663    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.213305    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.215040    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.215532    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.216531    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:55.220480 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:55.220495 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:55.245834 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:55.245869 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:57.774707 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:57.785110 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:57.785178 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:57.810275 1078428 cri.go:89] found id: ""
	I1210 07:52:57.810302 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.810320 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:57.810328 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:57.810389 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:57.838839 1078428 cri.go:89] found id: ""
	I1210 07:52:57.838862 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.838871 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:57.838877 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:57.838937 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:57.863185 1078428 cri.go:89] found id: ""
	I1210 07:52:57.863212 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.863221 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:57.863227 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:57.863287 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:57.890204 1078428 cri.go:89] found id: ""
	I1210 07:52:57.890234 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.890244 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:57.890250 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:57.890314 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:57.916593 1078428 cri.go:89] found id: ""
	I1210 07:52:57.916616 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.916624 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:57.916630 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:57.916690 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:57.940351 1078428 cri.go:89] found id: ""
	I1210 07:52:57.940373 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.940381 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:57.940387 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:57.940448 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:57.965417 1078428 cri.go:89] found id: ""
	I1210 07:52:57.965453 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.965462 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:57.965469 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:57.965535 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:57.989157 1078428 cri.go:89] found id: ""
	I1210 07:52:57.989183 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.989192 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:57.989202 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:57.989213 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:58.015326 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:58.015366 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:58.055222 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:58.055248 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:58.115866 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:58.115945 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:58.131823 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:58.131852 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:58.196880 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:58.188547    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.189067    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.190600    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.191285    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.192890    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:58.188547    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.189067    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.190600    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.191285    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.192890    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
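Every describe-nodes attempt in this stretch fails identically: nothing is serving on localhost:8443 inside the node. Rather than reading further repeats, the quickest confirmation is a direct socket check (a sketch; these two commands are not part of the test's own tooling, they assume ss and curl are available inside the node):

    # Is anything bound to the apiserver port?
    sudo ss -ltnp | grep 8443 || echo 'no listener on 8443'
    # Probe the health endpoint; here it would fail with connection refused
    curl -sk https://localhost:8443/healthz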
	I1210 07:52:57.402101 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:57.460754 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:57.460865 1077343 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1210 07:52:57.554262 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:59.503589 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:52:59.554549 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:59.576553 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:59.576655 1077343 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:52:59.579701 1077343 out.go:179] * Enabled addons: 
	I1210 07:52:59.582536 1077343 addons.go:530] duration metric: took 1m41.60352286s for enable addons: enabled=[]
	I1210 07:53:00.697148 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:00.707593 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:00.707661 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:00.735938 1078428 cri.go:89] found id: ""
	I1210 07:53:00.735962 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.735971 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:00.735977 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:00.736039 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:00.759785 1078428 cri.go:89] found id: ""
	I1210 07:53:00.759808 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.759817 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:00.759823 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:00.759887 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:00.784529 1078428 cri.go:89] found id: ""
	I1210 07:53:00.784552 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.784561 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:00.784567 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:00.784641 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:00.813420 1078428 cri.go:89] found id: ""
	I1210 07:53:00.813443 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.813452 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:00.813459 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:00.813518 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:00.838413 1078428 cri.go:89] found id: ""
	I1210 07:53:00.838439 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.838449 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:00.838455 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:00.838559 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:00.862923 1078428 cri.go:89] found id: ""
	I1210 07:53:00.862949 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.862968 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:00.862975 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:00.863034 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:00.890339 1078428 cri.go:89] found id: ""
	I1210 07:53:00.890366 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.890375 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:00.890381 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:00.890440 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:00.916963 1078428 cri.go:89] found id: ""
	I1210 07:53:00.916992 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.917001 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:00.917010 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:00.917022 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:00.972565 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:00.972601 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:00.990064 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:00.990154 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:01.068497 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:01.060518    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.061332    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.062990    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.063328    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.064654    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:01.060518    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.061332    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.062990    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.063328    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.064654    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:01.068521 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:01.068534 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:01.097602 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:01.097641 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:03.628666 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:03.639440 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:03.639518 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:03.664498 1078428 cri.go:89] found id: ""
	I1210 07:53:03.664523 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.664531 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:03.664538 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:03.664601 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:03.688357 1078428 cri.go:89] found id: ""
	I1210 07:53:03.688382 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.688391 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:03.688397 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:03.688460 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:03.712874 1078428 cri.go:89] found id: ""
	I1210 07:53:03.712898 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.712906 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:03.712913 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:03.712990 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:03.737610 1078428 cri.go:89] found id: ""
	I1210 07:53:03.737635 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.737643 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:03.737650 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:03.737712 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:03.762668 1078428 cri.go:89] found id: ""
	I1210 07:53:03.762695 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.762703 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:03.762710 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:03.762769 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:03.795710 1078428 cri.go:89] found id: ""
	I1210 07:53:03.795732 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.795741 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:03.795747 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:03.795809 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:03.819247 1078428 cri.go:89] found id: ""
	I1210 07:53:03.819275 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.819285 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:03.819291 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:03.819355 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:03.842854 1078428 cri.go:89] found id: ""
	I1210 07:53:03.842881 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.842891 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:03.842900 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:03.842911 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:03.858681 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:03.858748 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:03.922352 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:03.913909    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.914521    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.916231    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.916791    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.918447    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:03.913909    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.914521    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.916231    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.916791    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.918447    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:03.922383 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:03.922401 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:03.948481 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:03.948520 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:03.977218 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:03.977247 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:53:02.054010 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:04.555038 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:06.532410 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:06.544357 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:06.544451 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:06.576472 1078428 cri.go:89] found id: ""
	I1210 07:53:06.576500 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.576511 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:06.576517 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:06.576581 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:06.609024 1078428 cri.go:89] found id: ""
	I1210 07:53:06.609051 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.609061 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:06.609067 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:06.609134 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:06.636182 1078428 cri.go:89] found id: ""
	I1210 07:53:06.636209 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.636218 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:06.636224 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:06.636286 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:06.664610 1078428 cri.go:89] found id: ""
	I1210 07:53:06.664677 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.664699 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:06.664720 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:06.664812 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:06.690522 1078428 cri.go:89] found id: ""
	I1210 07:53:06.690548 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.690557 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:06.690564 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:06.690626 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:06.716006 1078428 cri.go:89] found id: ""
	I1210 07:53:06.716035 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.716044 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:06.716050 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:06.716115 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:06.740705 1078428 cri.go:89] found id: ""
	I1210 07:53:06.740726 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.740734 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:06.740741 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:06.740803 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:06.764831 1078428 cri.go:89] found id: ""
	I1210 07:53:06.764852 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.764860 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:06.764869 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:06.764881 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:06.820337 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:06.820372 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:06.836899 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:06.836931 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:06.902143 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:06.893655    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.894319    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.895838    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.896417    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.898017    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:06.893655    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.894319    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.895838    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.896417    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.898017    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:06.902164 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:06.902178 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:06.927253 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:06.927289 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:09.458854 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:09.469382 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:09.469466 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:09.494769 1078428 cri.go:89] found id: ""
	I1210 07:53:09.494791 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.494799 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:09.494805 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:09.494866 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1210 07:53:07.053986 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:09.554520 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:09.520347 1078428 cri.go:89] found id: ""
	I1210 07:53:09.520374 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.520383 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:09.520390 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:09.520454 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:09.549983 1078428 cri.go:89] found id: ""
	I1210 07:53:09.550010 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.550019 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:09.550025 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:09.550085 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:09.588794 1078428 cri.go:89] found id: ""
	I1210 07:53:09.588821 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.588830 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:09.588836 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:09.588895 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:09.617370 1078428 cri.go:89] found id: ""
	I1210 07:53:09.617393 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.617401 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:09.617407 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:09.617465 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:09.645730 1078428 cri.go:89] found id: ""
	I1210 07:53:09.645755 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.645779 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:09.645786 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:09.645850 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:09.672062 1078428 cri.go:89] found id: ""
	I1210 07:53:09.672088 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.672097 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:09.672103 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:09.672174 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:09.695770 1078428 cri.go:89] found id: ""
	I1210 07:53:09.695793 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.695802 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:09.695811 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:09.695822 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:09.721144 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:09.721180 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:09.748337 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:09.748367 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:09.802348 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:09.802384 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:09.818196 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:09.818226 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:09.884770 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:09.876535    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.877364    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.879072    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.879398    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.880893    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:09.876535    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.877364    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.879072    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.879398    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.880893    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:12.385627 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:12.396288 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:12.396367 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:12.421158 1078428 cri.go:89] found id: ""
	I1210 07:53:12.421194 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.421204 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:12.421210 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:12.421281 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:12.446171 1078428 cri.go:89] found id: ""
	I1210 07:53:12.446206 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.446216 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:12.446222 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:12.446294 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:12.470791 1078428 cri.go:89] found id: ""
	I1210 07:53:12.470818 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.470828 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:12.470836 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:12.470895 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:12.499441 1078428 cri.go:89] found id: ""
	I1210 07:53:12.499467 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.499476 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:12.499483 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:12.499561 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:12.524188 1078428 cri.go:89] found id: ""
	I1210 07:53:12.524211 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.524219 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:12.524225 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:12.524285 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:12.550501 1078428 cri.go:89] found id: ""
	I1210 07:53:12.550528 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.550537 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:12.550543 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:12.550617 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:12.578576 1078428 cri.go:89] found id: ""
	I1210 07:53:12.578602 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.578611 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:12.578616 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:12.578687 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:12.612078 1078428 cri.go:89] found id: ""
	I1210 07:53:12.612113 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.612122 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:12.612132 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:12.612144 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:12.645096 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:12.645125 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:12.700179 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:12.700217 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:12.715578 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:12.715606 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:12.781369 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:12.772466    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.773589    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.774343    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.775989    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.776533    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:12.772466    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.773589    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.774343    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.775989    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.776533    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:12.781391 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:12.781403 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1210 07:53:11.554633 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:14.054508 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:15.306176 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:15.317232 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:15.317315 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:15.336640 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:53:15.353595 1078428 cri.go:89] found id: ""
	I1210 07:53:15.353626 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.353635 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:15.353642 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:15.353703 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1210 07:53:15.421893 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:53:15.421994 1078428 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:53:15.422157 1078428 cri.go:89] found id: ""
	I1210 07:53:15.422177 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.422185 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:15.422192 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:15.422270 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:15.447660 1078428 cri.go:89] found id: ""
	I1210 07:53:15.447684 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.447693 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:15.447699 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:15.447763 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:15.471893 1078428 cri.go:89] found id: ""
	I1210 07:53:15.471918 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.471927 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:15.471934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:15.472003 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:15.496880 1078428 cri.go:89] found id: ""
	I1210 07:53:15.496915 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.496924 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:15.496930 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:15.496999 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:15.525007 1078428 cri.go:89] found id: ""
	I1210 07:53:15.525043 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.525055 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:15.525061 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:15.525138 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:15.556732 1078428 cri.go:89] found id: ""
	I1210 07:53:15.556776 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.556785 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:15.556792 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:15.556864 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:15.592802 1078428 cri.go:89] found id: ""
	I1210 07:53:15.592835 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.592844 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:15.592854 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:15.592866 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:15.660809 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:15.660846 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:15.677009 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:15.677040 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:15.743204 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:15.734495    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.735207    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.736858    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.737446    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.739134    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:15.734495    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.735207    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.736858    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.737446    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.739134    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:15.743227 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:15.743239 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:15.768020 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:15.768053 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:18.297028 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:18.310128 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:18.310198 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:18.340476 1078428 cri.go:89] found id: ""
	I1210 07:53:18.340572 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.340599 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:18.340642 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:18.340769 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:18.369516 1078428 cri.go:89] found id: ""
	I1210 07:53:18.369582 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.369614 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:18.369633 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:18.369753 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:18.396295 1078428 cri.go:89] found id: ""
	I1210 07:53:18.396321 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.396330 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:18.396336 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:18.396428 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:18.422012 1078428 cri.go:89] found id: ""
	I1210 07:53:18.422037 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.422046 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:18.422052 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:18.422164 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:18.446495 1078428 cri.go:89] found id: ""
	I1210 07:53:18.446518 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.446526 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:18.446532 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:18.446600 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:18.471650 1078428 cri.go:89] found id: ""
	I1210 07:53:18.471674 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.471682 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:18.471688 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:18.471779 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:18.495591 1078428 cri.go:89] found id: ""
	I1210 07:53:18.495616 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.495624 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:18.495631 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:18.495694 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:18.523464 1078428 cri.go:89] found id: ""
	I1210 07:53:18.523489 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.523497 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:18.523506 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:18.523518 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:18.585434 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:18.585481 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:18.610315 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:18.610344 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:18.674572 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:18.666289    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.667061    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.668721    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.669016    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.670764    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:18.666289    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.667061    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.668721    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.669016    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.670764    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
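Every "describe nodes" attempt in this section fails the same way: kubectl cannot reach an apiserver on localhost:8443, which is consistent with the empty kube-apiserver container listings above. A quick reachability check along the same lines (a hypothetical one-off probe, not part of the test flow):

    # Is anything listening on the apiserver port inside the node?
    sudo ss -ltnp | grep 8443 || echo "no listener on port 8443"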
	I1210 07:53:18.674593 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:18.674607 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:18.699401 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:18.699435 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
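The container-status step uses a small shell fallback so it works on both CRI and Docker runtimes: the backquoted "which crictl || echo crictl" keeps the left-hand command non-empty, so if crictl is genuinely absent the left side fails cleanly and the docker branch runs instead. Annotated form of the same one-liner:

    # Try crictl first; if it is missing (or fails), fall back to docker.
    sudo `which crictl || echo crictl` ps -a \
      || sudo docker ps -a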
	I1210 07:53:19.389521 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:53:19.452005 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:53:19.452105 1078428 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:53:19.455408 1078428 out.go:179] * Enabled addons: 
	I1210 07:53:19.458237 1078428 addons.go:530] duration metric: took 1m57.316864384s for enable addons: enabled=[]
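The storage-provisioner addon fails for the same underlying reason: kubectl apply validates the manifest against the OpenAPI schema, which it must download from the unreachable apiserver, so the callback errors out and the enabled-addons list ends up empty. The --validate=false flag suggested in the error only skips the schema download; as a sketch, the apply itself would still need a live apiserver:

    # Hypothetical retry with validation disabled; this would still fail
    # here, since the apply has to POST to the (down) apiserver anyway.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --validate=false \
      -f /etc/kubernetes/addons/storage-provisioner.yaml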
	W1210 07:53:16.054718 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:18.554815 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
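The W-lines from PID 1077343 are interleaved from the parallel no-preload test, which polls the Ready condition of node no-preload-587009 against 192.168.85.2:8443 and is hitting the same connection-refused failure. An equivalent manual probe (hypothetical; -k skips TLS verification, so this only tests reachability, not auth):

    curl -sk https://192.168.85.2:8443/api/v1/nodes/no-preload-587009 \
      || echo "connection refused"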
	I1210 07:53:21.227168 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:21.237506 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:21.237577 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:21.261812 1078428 cri.go:89] found id: ""
	I1210 07:53:21.261842 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.261852 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:21.261858 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:21.261921 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:21.289741 1078428 cri.go:89] found id: ""
	I1210 07:53:21.289767 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.289787 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:21.289794 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:21.289855 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:21.331373 1078428 cri.go:89] found id: ""
	I1210 07:53:21.331400 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.331410 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:21.331415 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:21.331534 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:21.364401 1078428 cri.go:89] found id: ""
	I1210 07:53:21.364427 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.364436 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:21.364443 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:21.364504 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:21.395936 1078428 cri.go:89] found id: ""
	I1210 07:53:21.395965 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.395975 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:21.395981 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:21.396044 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:21.420965 1078428 cri.go:89] found id: ""
	I1210 07:53:21.420996 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.421005 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:21.421012 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:21.421073 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:21.446318 1078428 cri.go:89] found id: ""
	I1210 07:53:21.446345 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.446354 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:21.446360 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:21.446422 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:21.475470 1078428 cri.go:89] found id: ""
	I1210 07:53:21.475499 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.475509 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:21.475521 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:21.475537 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:21.530313 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:21.530354 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:21.548651 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:21.548737 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:21.632055 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:21.623055    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.623614    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.625291    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.625976    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.627769    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:21.623055    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.623614    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.625291    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.625976    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.627769    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:21.632137 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:21.632157 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:21.659428 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:21.659466 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:24.192421 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:24.203056 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:24.203137 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:24.232457 1078428 cri.go:89] found id: ""
	I1210 07:53:24.232493 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.232502 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:24.232509 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:24.232576 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:24.260730 1078428 cri.go:89] found id: ""
	I1210 07:53:24.260758 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.260768 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:24.260774 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:24.260837 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:24.284981 1078428 cri.go:89] found id: ""
	I1210 07:53:24.285009 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.285018 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:24.285024 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:24.285086 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:24.316578 1078428 cri.go:89] found id: ""
	I1210 07:53:24.316604 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.316613 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:24.316619 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:24.316678 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:24.353587 1078428 cri.go:89] found id: ""
	I1210 07:53:24.353622 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.353638 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:24.353645 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:24.353740 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:24.384460 1078428 cri.go:89] found id: ""
	I1210 07:53:24.384483 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.384492 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:24.384498 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:24.384562 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:24.414252 1078428 cri.go:89] found id: ""
	I1210 07:53:24.414280 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.414290 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:24.414296 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:24.414361 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:24.442225 1078428 cri.go:89] found id: ""
	I1210 07:53:24.442247 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.442256 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:24.442265 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:24.442276 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:24.467596 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:24.467629 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:53:21.054852 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:23.554809 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:24.499949 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:24.499977 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:24.558185 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:24.558223 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:24.576232 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:24.576264 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:24.646699 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:24.638205    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.639089    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.639811    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.641363    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.641799    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:24.638205    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.639089    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.639811    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.641363    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.641799    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
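Each retry iteration opens with the same liveness probe: pgrep is used to ask whether a kube-apiserver process exists at all before the container listings are re-enumerated. The flags are worth spelling out:

    # -f  match against the full command line, not just the process name
    # -x  require the pattern to match that full command line exactly
    # -n  print only the newest matching process
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      || echo "no kube-apiserver process yet"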
	I1210 07:53:27.148382 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:27.158984 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:27.159102 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:27.183857 1078428 cri.go:89] found id: ""
	I1210 07:53:27.183927 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.183943 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:27.183951 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:27.184028 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:27.207461 1078428 cri.go:89] found id: ""
	I1210 07:53:27.207529 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.207554 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:27.207568 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:27.207645 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:27.234849 1078428 cri.go:89] found id: ""
	I1210 07:53:27.234876 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.234884 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:27.234890 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:27.234948 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:27.258887 1078428 cri.go:89] found id: ""
	I1210 07:53:27.258910 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.258919 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:27.258926 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:27.258983 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:27.283113 1078428 cri.go:89] found id: ""
	I1210 07:53:27.283189 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.283206 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:27.283214 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:27.283283 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:27.324968 1078428 cri.go:89] found id: ""
	I1210 07:53:27.324994 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.325004 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:27.325010 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:27.325070 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:27.355711 1078428 cri.go:89] found id: ""
	I1210 07:53:27.355739 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.355749 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:27.355755 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:27.355817 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:27.383387 1078428 cri.go:89] found id: ""
	I1210 07:53:27.383424 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.383435 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:27.383445 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:27.383456 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:27.408324 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:27.408363 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:27.438348 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:27.438424 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:27.496282 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:27.496317 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:27.512354 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:27.512385 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:27.586988 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:27.577963    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.578714    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.580435    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.580907    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.582816    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:27.577963    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.578714    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.580435    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.580907    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.582816    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1210 07:53:26.054246 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:28.554092 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:30.088030 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:30.100373 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:30.100449 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:30.127922 1078428 cri.go:89] found id: ""
	I1210 07:53:30.127998 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.128023 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:30.128041 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:30.128120 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:30.160672 1078428 cri.go:89] found id: ""
	I1210 07:53:30.160699 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.160709 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:30.160722 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:30.160784 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:30.186050 1078428 cri.go:89] found id: ""
	I1210 07:53:30.186077 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.186086 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:30.186093 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:30.186157 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:30.211107 1078428 cri.go:89] found id: ""
	I1210 07:53:30.211132 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.211141 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:30.211147 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:30.211213 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:30.235571 1078428 cri.go:89] found id: ""
	I1210 07:53:30.235598 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.235608 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:30.235615 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:30.235678 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:30.264308 1078428 cri.go:89] found id: ""
	I1210 07:53:30.264331 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.264339 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:30.264346 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:30.264413 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:30.288489 1078428 cri.go:89] found id: ""
	I1210 07:53:30.288557 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.288581 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:30.288594 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:30.288673 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:30.318600 1078428 cri.go:89] found id: ""
	I1210 07:53:30.318628 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.318638 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:30.318648 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:30.318679 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:30.359074 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:30.359103 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:30.417146 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:30.417182 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:30.432931 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:30.432960 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:30.497452 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:30.488702    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.489502    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.491238    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.491784    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.493510    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:30.488702    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.489502    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.491238    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.491784    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.493510    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:30.497474 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:30.497487 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
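The journal- and dmesg-based gather steps are bounded the same way on every pass: the last 400 lines per systemd unit, and kernel messages filtered to warning level and above. A stand-alone version of those collectors (assuming systemd journalctl and util-linux dmesg):

    sudo journalctl -u containerd -n 400    # last 400 containerd lines
    sudo journalctl -u kubelet -n 400       # last 400 kubelet lines
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400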
	I1210 07:53:33.027579 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:33.038128 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:33.038197 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:33.063535 1078428 cri.go:89] found id: ""
	I1210 07:53:33.063560 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.063572 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:33.063578 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:33.063642 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:33.087384 1078428 cri.go:89] found id: ""
	I1210 07:53:33.087406 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.087414 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:33.087420 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:33.087478 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:33.112186 1078428 cri.go:89] found id: ""
	I1210 07:53:33.112247 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.112258 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:33.112265 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:33.112326 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:33.136102 1078428 cri.go:89] found id: ""
	I1210 07:53:33.136125 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.136133 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:33.136139 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:33.136202 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:33.160865 1078428 cri.go:89] found id: ""
	I1210 07:53:33.160931 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.160957 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:33.160986 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:33.161071 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:33.185964 1078428 cri.go:89] found id: ""
	I1210 07:53:33.186031 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.186054 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:33.186075 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:33.186150 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:33.211060 1078428 cri.go:89] found id: ""
	I1210 07:53:33.211086 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.211095 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:33.211100 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:33.211180 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:33.236111 1078428 cri.go:89] found id: ""
	I1210 07:53:33.236180 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.236213 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:33.236227 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:33.236251 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:33.252003 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:33.252029 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:33.315902 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:33.308251    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.308659    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.310144    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.310442    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.311844    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:33.308251    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.308659    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.310144    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.310442    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.311844    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:33.315967 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:33.316003 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:33.342524 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:33.342604 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:33.377391 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:33.377419 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:53:30.554186 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:33.054061 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:35.054801 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:35.933860 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:35.945070 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:35.945142 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:35.971394 1078428 cri.go:89] found id: ""
	I1210 07:53:35.971423 1078428 logs.go:282] 0 containers: []
	W1210 07:53:35.971432 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:35.971438 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:35.971501 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:36.005170 1078428 cri.go:89] found id: ""
	I1210 07:53:36.005227 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.005240 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:36.005248 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:36.005329 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:36.035275 1078428 cri.go:89] found id: ""
	I1210 07:53:36.035299 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.035307 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:36.035313 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:36.035380 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:36.060232 1078428 cri.go:89] found id: ""
	I1210 07:53:36.060255 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.060266 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:36.060272 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:36.060336 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:36.084825 1078428 cri.go:89] found id: ""
	I1210 07:53:36.084850 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.084859 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:36.084866 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:36.084955 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:36.110606 1078428 cri.go:89] found id: ""
	I1210 07:53:36.110630 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.110639 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:36.110664 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:36.110728 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:36.139205 1078428 cri.go:89] found id: ""
	I1210 07:53:36.139232 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.139241 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:36.139248 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:36.139358 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:36.165255 1078428 cri.go:89] found id: ""
	I1210 07:53:36.165279 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.165287 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:36.165296 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:36.165308 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:36.190967 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:36.191003 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:36.228036 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:36.228070 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:36.283588 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:36.283626 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:36.308631 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:36.308660 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:36.382721 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:36.374555    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.375219    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.376727    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.377183    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.378650    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:36.374555    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.375219    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.376727    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.377183    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.378650    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:38.882925 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:38.893611 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:38.893738 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:38.919385 1078428 cri.go:89] found id: ""
	I1210 07:53:38.919418 1078428 logs.go:282] 0 containers: []
	W1210 07:53:38.919427 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:38.919433 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:38.919504 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:38.943787 1078428 cri.go:89] found id: ""
	I1210 07:53:38.943814 1078428 logs.go:282] 0 containers: []
	W1210 07:53:38.943824 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:38.943832 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:38.943896 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:38.968361 1078428 cri.go:89] found id: ""
	I1210 07:53:38.968433 1078428 logs.go:282] 0 containers: []
	W1210 07:53:38.968451 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:38.968458 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:38.968520 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:38.995636 1078428 cri.go:89] found id: ""
	I1210 07:53:38.995661 1078428 logs.go:282] 0 containers: []
	W1210 07:53:38.995670 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:38.995677 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:38.995754 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:39.021416 1078428 cri.go:89] found id: ""
	I1210 07:53:39.021452 1078428 logs.go:282] 0 containers: []
	W1210 07:53:39.021462 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:39.021470 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:39.021552 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:39.048415 1078428 cri.go:89] found id: ""
	I1210 07:53:39.048441 1078428 logs.go:282] 0 containers: []
	W1210 07:53:39.048450 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:39.048456 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:39.048545 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:39.074528 1078428 cri.go:89] found id: ""
	I1210 07:53:39.074554 1078428 logs.go:282] 0 containers: []
	W1210 07:53:39.074563 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:39.074569 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:39.074633 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:39.099525 1078428 cri.go:89] found id: ""
	I1210 07:53:39.099551 1078428 logs.go:282] 0 containers: []
	W1210 07:53:39.099571 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:39.099581 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:39.099594 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:39.166056 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:39.157362    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.157880    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.159752    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.160310    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.161990    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:39.157362    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.157880    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.159752    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.160310    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.161990    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:39.166080 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:39.166094 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:39.191445 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:39.191482 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:39.221901 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:39.221931 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:39.276698 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:39.276735 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:53:37.554212 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:40.054014 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:41.793231 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:41.806351 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:41.806419 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:41.833486 1078428 cri.go:89] found id: ""
	I1210 07:53:41.833508 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.833517 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:41.833523 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:41.833587 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:41.863627 1078428 cri.go:89] found id: ""
	I1210 07:53:41.863650 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.863659 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:41.863665 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:41.863723 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:41.891468 1078428 cri.go:89] found id: ""
	I1210 07:53:41.891492 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.891502 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:41.891509 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:41.891575 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:41.916517 1078428 cri.go:89] found id: ""
	I1210 07:53:41.916542 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.916550 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:41.916557 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:41.916616 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:41.942528 1078428 cri.go:89] found id: ""
	I1210 07:53:41.942555 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.942577 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:41.942584 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:41.942646 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:41.966600 1078428 cri.go:89] found id: ""
	I1210 07:53:41.966624 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.966633 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:41.966639 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:41.966707 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:41.990797 1078428 cri.go:89] found id: ""
	I1210 07:53:41.990831 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.990840 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:41.990846 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:41.990914 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:42.024121 1078428 cri.go:89] found id: ""
	I1210 07:53:42.024148 1078428 logs.go:282] 0 containers: []
	W1210 07:53:42.024158 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:42.024169 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:42.024181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:42.080753 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:42.080799 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:42.098930 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:42.098965 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:42.176005 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:42.165806    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.166811    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.168570    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.169232    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.170912    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:42.176075 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:42.176108 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:42.205998 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:42.206045 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:53:42.054513 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:44.553993 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:44.740690 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:44.751788 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:44.751908 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:44.777536 1078428 cri.go:89] found id: ""
	I1210 07:53:44.777563 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.777571 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:44.777578 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:44.777640 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:44.805133 1078428 cri.go:89] found id: ""
	I1210 07:53:44.805161 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.805170 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:44.805176 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:44.805237 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:44.842340 1078428 cri.go:89] found id: ""
	I1210 07:53:44.842368 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.842383 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:44.842390 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:44.842451 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:44.875009 1078428 cri.go:89] found id: ""
	I1210 07:53:44.875035 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.875044 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:44.875050 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:44.875144 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:44.900854 1078428 cri.go:89] found id: ""
	I1210 07:53:44.900880 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.900889 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:44.900895 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:44.900993 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:44.926168 1078428 cri.go:89] found id: ""
	I1210 07:53:44.926194 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.926203 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:44.926210 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:44.926302 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:44.951565 1078428 cri.go:89] found id: ""
	I1210 07:53:44.951590 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.951599 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:44.951605 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:44.951700 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:44.981123 1078428 cri.go:89] found id: ""
	I1210 07:53:44.981151 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.981160 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:44.981170 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:44.981181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:45.061176 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:45.048172    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.049203    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.052057    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.052627    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.055814    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:45.061213 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:45.061227 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:45.119245 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:45.119283 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:45.172398 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:45.172430 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:45.255583 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:45.255726 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:47.779428 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:47.790537 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:47.790611 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:47.831579 1078428 cri.go:89] found id: ""
	I1210 07:53:47.831602 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.831610 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:47.831617 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:47.831677 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:47.859808 1078428 cri.go:89] found id: ""
	I1210 07:53:47.859835 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.859844 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:47.859850 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:47.859916 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:47.885720 1078428 cri.go:89] found id: ""
	I1210 07:53:47.885745 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.885754 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:47.885761 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:47.885829 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:47.910568 1078428 cri.go:89] found id: ""
	I1210 07:53:47.910594 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.910604 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:47.910610 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:47.910668 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:47.934447 1078428 cri.go:89] found id: ""
	I1210 07:53:47.934495 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.934505 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:47.934511 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:47.934571 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:47.959745 1078428 cri.go:89] found id: ""
	I1210 07:53:47.959772 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.959782 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:47.959788 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:47.959871 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:47.984059 1078428 cri.go:89] found id: ""
	I1210 07:53:47.984085 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.984095 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:47.984102 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:47.984163 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:48.011978 1078428 cri.go:89] found id: ""
	I1210 07:53:48.012007 1078428 logs.go:282] 0 containers: []
	W1210 07:53:48.012018 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:48.012030 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:48.012043 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:48.069700 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:48.069738 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:48.086303 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:48.086345 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:48.160973 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:48.152656    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.153231    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.154815    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.155339    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.156984    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:48.160994 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:48.161008 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:48.185832 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:48.185868 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:53:46.554777 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:49.054179 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:50.713469 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:50.724372 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:50.724452 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:50.750268 1078428 cri.go:89] found id: ""
	I1210 07:53:50.750292 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.750300 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:50.750306 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:50.750368 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:50.776624 1078428 cri.go:89] found id: ""
	I1210 07:53:50.776689 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.776704 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:50.776711 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:50.776769 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:50.807024 1078428 cri.go:89] found id: ""
	I1210 07:53:50.807051 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.807060 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:50.807070 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:50.807127 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:50.851753 1078428 cri.go:89] found id: ""
	I1210 07:53:50.851831 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.851855 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:50.851879 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:50.852000 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:50.878419 1078428 cri.go:89] found id: ""
	I1210 07:53:50.878571 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.878589 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:50.878597 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:50.878667 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:50.904710 1078428 cri.go:89] found id: ""
	I1210 07:53:50.904741 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.904750 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:50.904756 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:50.904819 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:50.929368 1078428 cri.go:89] found id: ""
	I1210 07:53:50.929398 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.929421 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:50.929428 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:50.929495 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:50.956973 1078428 cri.go:89] found id: ""
	I1210 07:53:50.956998 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.957006 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:50.957016 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:50.957028 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:50.982743 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:50.982778 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:51.015675 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:51.015706 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:51.072656 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:51.072697 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:51.089028 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:51.089115 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:51.156089 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:51.147578    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.148310    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.150005    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.150357    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.151933    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:53.657305 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:53.668282 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:53.668364 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:53.693314 1078428 cri.go:89] found id: ""
	I1210 07:53:53.693340 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.693349 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:53.693356 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:53.693417 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:53.718128 1078428 cri.go:89] found id: ""
	I1210 07:53:53.718154 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.718169 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:53.718176 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:53.718234 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:53.744359 1078428 cri.go:89] found id: ""
	I1210 07:53:53.744397 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.744406 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:53.744412 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:53.744485 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:53.773658 1078428 cri.go:89] found id: ""
	I1210 07:53:53.773737 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.773760 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:53.773782 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:53.773879 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:53.804702 1078428 cri.go:89] found id: ""
	I1210 07:53:53.804772 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.804796 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:53.804815 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:53.804905 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:53.840639 1078428 cri.go:89] found id: ""
	I1210 07:53:53.840706 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.840730 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:53.840753 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:53.840846 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:53.869303 1078428 cri.go:89] found id: ""
	I1210 07:53:53.869373 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.869397 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:53.869419 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:53.869508 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:53.898651 1078428 cri.go:89] found id: ""
	I1210 07:53:53.898742 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.898764 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:53.898787 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:53.898821 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:53.924144 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:53.924181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:53.953086 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:53.953118 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:54.008451 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:54.008555 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:54.027281 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:54.027312 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:54.091065 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:54.082296    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.083017    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.084636    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.085187    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.086808    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 07:53:51.054819 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:53.554121 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:56.591259 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:56.602391 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:56.602493 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:56.627566 1078428 cri.go:89] found id: ""
	I1210 07:53:56.627597 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.627607 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:56.627614 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:56.627677 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:56.654900 1078428 cri.go:89] found id: ""
	I1210 07:53:56.654928 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.654937 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:56.654944 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:56.655007 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:56.679562 1078428 cri.go:89] found id: ""
	I1210 07:53:56.679592 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.679606 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:56.679612 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:56.679737 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:56.703320 1078428 cri.go:89] found id: ""
	I1210 07:53:56.703345 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.703355 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:56.703361 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:56.703420 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:56.731538 1078428 cri.go:89] found id: ""
	I1210 07:53:56.731564 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.731573 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:56.731579 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:56.731664 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:56.756416 1078428 cri.go:89] found id: ""
	I1210 07:53:56.756442 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.756451 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:56.756457 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:56.756523 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:56.785074 1078428 cri.go:89] found id: ""
	I1210 07:53:56.785097 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.785106 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:56.785111 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:56.785171 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:56.815793 1078428 cri.go:89] found id: ""
	I1210 07:53:56.815821 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.815831 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:56.815842 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:56.815856 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:56.834351 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:56.834380 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:56.907823 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:56.899947    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.900451    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.901995    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.902428    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.903899    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:56.907857 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:56.907871 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:56.933197 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:56.933233 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:56.964346 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:56.964378 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:53:55.554659 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:58.054078 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:00.054143 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:59.520946 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:59.531324 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:59.531414 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:59.563870 1078428 cri.go:89] found id: ""
	I1210 07:53:59.563897 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.563907 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:59.563913 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:59.564000 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:59.593355 1078428 cri.go:89] found id: ""
	I1210 07:53:59.593385 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.593394 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:59.593400 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:59.593468 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:59.620235 1078428 cri.go:89] found id: ""
	I1210 07:53:59.620263 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.620272 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:59.620278 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:59.620338 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:59.645074 1078428 cri.go:89] found id: ""
	I1210 07:53:59.645099 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.645108 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:59.645114 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:59.645178 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:59.673804 1078428 cri.go:89] found id: ""
	I1210 07:53:59.673830 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.673839 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:59.673845 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:59.673902 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:59.697766 1078428 cri.go:89] found id: ""
	I1210 07:53:59.697793 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.697803 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:59.697810 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:59.697868 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:59.725582 1078428 cri.go:89] found id: ""
	I1210 07:53:59.725608 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.725617 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:59.725623 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:59.725681 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:59.750402 1078428 cri.go:89] found id: ""
	I1210 07:53:59.750428 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.750437 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:59.750447 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:59.750458 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:59.775346 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:59.775383 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:59.815776 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:59.815804 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:59.876120 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:59.876164 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:59.897440 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:59.897470 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:59.962486 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:59.954416    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.955115    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.956695    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.956999    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.958498    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:02.463154 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:02.473950 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:02.474039 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:02.498884 1078428 cri.go:89] found id: ""
	I1210 07:54:02.498907 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.498916 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:02.498923 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:02.498982 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:02.523553 1078428 cri.go:89] found id: ""
	I1210 07:54:02.523582 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.523591 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:02.523597 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:02.523660 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:02.552876 1078428 cri.go:89] found id: ""
	I1210 07:54:02.552902 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.552911 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:02.552918 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:02.552976 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:02.583793 1078428 cri.go:89] found id: ""
	I1210 07:54:02.583818 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.583827 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:02.583833 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:02.583895 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:02.625932 1078428 cri.go:89] found id: ""
	I1210 07:54:02.625959 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.625969 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:02.625976 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:02.626044 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:02.652709 1078428 cri.go:89] found id: ""
	I1210 07:54:02.652784 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.652800 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:02.652808 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:02.652868 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:02.680830 1078428 cri.go:89] found id: ""
	I1210 07:54:02.680859 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.680868 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:02.680874 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:02.680933 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:02.706663 1078428 cri.go:89] found id: ""
	I1210 07:54:02.706687 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.706696 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:02.706704 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:02.706715 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:02.763069 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:02.763105 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:02.779309 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:02.779340 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:02.864302 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:02.854179    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.854989    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.858064    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.858689    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.860317    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:02.854179    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.854989    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.858064    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.858689    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.860317    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:02.864326 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:02.864339 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:02.890235 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:02.890274 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
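	Note: the per-component enumeration above is one crictl query repeated with a different --name filter each time. An equivalent loop, using exactly the flags from the log:

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$c"   # -a includes exited containers
	    done
	    # Empty output for every name, as above, means the containers were never
	    # created, not merely that they crashed.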
	W1210 07:54:02.554570 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:04.555006 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
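	Note: the interleaved 1077343 lines come from the parallel no-preload test, which is polling its node's Ready condition against its own apiserver at 192.168.85.2:8443 and hitting the same connection-refused state. A rough equivalent of that probe (standard kubectl; node name and server address taken from the log lines above):

	    kubectl get node no-preload-587009 --server=https://192.168.85.2:8443 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # With that apiserver down, this also fails with "connection refused",
	    # so the test loop logs a warning and retries.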
	I1210 07:54:05.418128 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:05.429523 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:05.429604 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:05.456726 1078428 cri.go:89] found id: ""
	I1210 07:54:05.456755 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.456765 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:05.456772 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:05.456851 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:05.485039 1078428 cri.go:89] found id: ""
	I1210 07:54:05.485065 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.485074 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:05.485080 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:05.485169 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:05.510634 1078428 cri.go:89] found id: ""
	I1210 07:54:05.510658 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.510668 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:05.510674 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:05.510733 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:05.536710 1078428 cri.go:89] found id: ""
	I1210 07:54:05.536743 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.536753 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:05.536760 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:05.536848 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:05.568911 1078428 cri.go:89] found id: ""
	I1210 07:54:05.568991 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.569015 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:05.569040 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:05.569150 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:05.598888 1078428 cri.go:89] found id: ""
	I1210 07:54:05.598964 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.598987 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:05.599007 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:05.599101 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:05.630665 1078428 cri.go:89] found id: ""
	I1210 07:54:05.630741 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.630771 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:05.630779 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:05.630850 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:05.654676 1078428 cri.go:89] found id: ""
	I1210 07:54:05.654702 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.654712 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:05.654722 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:05.654733 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:05.712685 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:05.712722 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:05.728743 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:05.728774 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:05.807287 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:05.790159    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.790981    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.792596    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.793583    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.794194    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:05.790159    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.790981    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.792596    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.793583    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.794194    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:05.807311 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:05.807325 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:05.835209 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:05.835246 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
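	Note: with no containers to inspect, each gather cycle falls back to host-level sources. The journal and kernel-log commands it runs, verbatim from the log (-u selects the systemd unit, -n caps the output at 400 lines):

	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400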
	I1210 07:54:08.367017 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:08.377830 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:08.377904 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:08.402753 1078428 cri.go:89] found id: ""
	I1210 07:54:08.402778 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.402787 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:08.402795 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:08.402856 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:08.427920 1078428 cri.go:89] found id: ""
	I1210 07:54:08.427947 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.427956 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:08.427963 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:08.428021 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:08.453012 1078428 cri.go:89] found id: ""
	I1210 07:54:08.453037 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.453045 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:08.453052 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:08.453114 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:08.477565 1078428 cri.go:89] found id: ""
	I1210 07:54:08.477591 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.477606 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:08.477612 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:08.477673 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:08.501669 1078428 cri.go:89] found id: ""
	I1210 07:54:08.501694 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.501740 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:08.501750 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:08.501816 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:08.530594 1078428 cri.go:89] found id: ""
	I1210 07:54:08.530667 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.530704 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:08.530719 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:08.530799 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:08.561145 1078428 cri.go:89] found id: ""
	I1210 07:54:08.561171 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.561179 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:08.561186 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:08.561244 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:08.595663 1078428 cri.go:89] found id: ""
	I1210 07:54:08.595686 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.595695 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:08.595706 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:08.595718 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:08.622963 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:08.623002 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:08.652801 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:08.652829 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:08.708272 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:08.708307 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:08.724144 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:08.724174 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:08.790000 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:08.782113    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.782760    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.784347    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.784840    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.786314    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:08.782113    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.782760    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.784347    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.784840    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.786314    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1210 07:54:07.054035 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:09.054348 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:11.291584 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:11.302037 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:11.302111 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:11.331607 1078428 cri.go:89] found id: ""
	I1210 07:54:11.331631 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.331640 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:11.331646 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:11.331711 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:11.355008 1078428 cri.go:89] found id: ""
	I1210 07:54:11.355031 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.355039 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:11.355045 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:11.355104 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:11.380347 1078428 cri.go:89] found id: ""
	I1210 07:54:11.380423 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.380463 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:11.380485 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:11.380572 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:11.410797 1078428 cri.go:89] found id: ""
	I1210 07:54:11.410824 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.410834 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:11.410840 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:11.410898 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:11.435927 1078428 cri.go:89] found id: ""
	I1210 07:54:11.435996 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.436021 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:11.436035 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:11.436109 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:11.461484 1078428 cri.go:89] found id: ""
	I1210 07:54:11.461520 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.461529 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:11.461536 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:11.461603 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:11.486793 1078428 cri.go:89] found id: ""
	I1210 07:54:11.486817 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.486825 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:11.486831 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:11.486890 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:11.515338 1078428 cri.go:89] found id: ""
	I1210 07:54:11.515364 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.515374 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:11.515384 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:11.515396 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:11.593473 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:11.585339    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.586129    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.587754    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.588062    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.589542    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:11.585339    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.586129    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.587754    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.588062    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.589542    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:11.593495 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:11.593509 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:11.619492 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:11.619523 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:11.646739 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:11.646771 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:11.701149 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:11.701187 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
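	Note: the cycles repeat on a roughly 3-second cadence (07:54:02, :05, :08, :11, ...), and the order of the gather steps varies between cycles. When debugging by hand, a sketch that approximates the same polling (watch is standard util-linux/procps; the crictl flags are the ones from the log):

	    watch -n 3 'sudo crictl ps -a --quiet --name=kube-apiserver'
	    # The output staying empty across iterations mirrors the repeated
	    # "No container was found" warnings above.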
	I1210 07:54:14.217342 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:14.228228 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:14.228306 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:14.254323 1078428 cri.go:89] found id: ""
	I1210 07:54:14.254360 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.254369 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:14.254375 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:14.254443 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:14.279268 1078428 cri.go:89] found id: ""
	I1210 07:54:14.279295 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.279303 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:14.279310 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:14.279397 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:14.304531 1078428 cri.go:89] found id: ""
	I1210 07:54:14.304558 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.304567 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:14.304574 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:14.304647 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:14.329458 1078428 cri.go:89] found id: ""
	I1210 07:54:14.329487 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.329496 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:14.329502 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:14.329563 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:14.359168 1078428 cri.go:89] found id: ""
	I1210 07:54:14.359241 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.359258 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:14.359266 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:14.359348 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:14.386391 1078428 cri.go:89] found id: ""
	I1210 07:54:14.386426 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.386435 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:14.386442 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:14.386540 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:14.411808 1078428 cri.go:89] found id: ""
	I1210 07:54:14.411843 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.411862 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:14.411870 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:14.411946 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:14.440262 1078428 cri.go:89] found id: ""
	I1210 07:54:14.440292 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.440301 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:14.440311 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:14.440322 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:54:11.553952 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:13.554999 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:14.496340 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:14.496376 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:14.512934 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:14.512963 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:14.584969 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:14.576398    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.577208    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.578910    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.579485    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.581005    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:14.576398    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.577208    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.578910    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.579485    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.581005    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:14.585042 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:14.585069 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:14.615045 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:14.615086 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
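	Note: the "container status" step is deliberately fail-soft. Verbatim from the log, with the fallback chain spelled out:

	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    # If crictl is not on root's PATH, `which` fails and the bare name
	    # "crictl" is substituted so the command still parses; if crictl itself
	    # then fails, docker ps -a is tried as a last resort.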
	I1210 07:54:17.146612 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:17.157236 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:17.157307 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:17.184080 1078428 cri.go:89] found id: ""
	I1210 07:54:17.184102 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.184111 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:17.184117 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:17.184177 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:17.212720 1078428 cri.go:89] found id: ""
	I1210 07:54:17.212745 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.212754 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:17.212760 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:17.212822 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:17.238495 1078428 cri.go:89] found id: ""
	I1210 07:54:17.238521 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.238529 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:17.238542 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:17.238603 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:17.262892 1078428 cri.go:89] found id: ""
	I1210 07:54:17.262921 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.262930 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:17.262936 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:17.262996 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:17.291473 1078428 cri.go:89] found id: ""
	I1210 07:54:17.291498 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.291508 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:17.291514 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:17.291573 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:17.317108 1078428 cri.go:89] found id: ""
	I1210 07:54:17.317133 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.317142 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:17.317149 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:17.317209 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:17.344918 1078428 cri.go:89] found id: ""
	I1210 07:54:17.344944 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.344953 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:17.344959 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:17.345019 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:17.370082 1078428 cri.go:89] found id: ""
	I1210 07:54:17.370109 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.370118 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:17.370128 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:17.370139 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:17.427357 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:17.427407 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:17.443363 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:17.443393 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:17.509516 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:17.501130    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.501831    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.503489    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.503992    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.505575    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:17.501130    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.501831    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.503489    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.503992    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.505575    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:17.509538 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:17.509551 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:17.535043 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:17.535078 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:54:16.053965 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:18.054150 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:20.071194 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:20.083928 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:20.084059 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:20.119958 1078428 cri.go:89] found id: ""
	I1210 07:54:20.119987 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.119996 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:20.120002 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:20.120062 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:20.144861 1078428 cri.go:89] found id: ""
	I1210 07:54:20.144883 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.144891 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:20.144897 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:20.144957 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:20.180042 1078428 cri.go:89] found id: ""
	I1210 07:54:20.180069 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.180078 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:20.180085 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:20.180151 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:20.208390 1078428 cri.go:89] found id: ""
	I1210 07:54:20.208423 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.208432 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:20.208439 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:20.208511 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:20.234337 1078428 cri.go:89] found id: ""
	I1210 07:54:20.234358 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.234367 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:20.234373 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:20.234441 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:20.263116 1078428 cri.go:89] found id: ""
	I1210 07:54:20.263138 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.263146 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:20.263153 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:20.263213 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:20.287115 1078428 cri.go:89] found id: ""
	I1210 07:54:20.287188 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.287203 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:20.287210 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:20.287281 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:20.312391 1078428 cri.go:89] found id: ""
	I1210 07:54:20.312415 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.312423 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:20.312432 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:20.312443 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:20.369802 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:20.369838 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:20.387018 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:20.387099 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:20.458731 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:20.450398    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.451165    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.452844    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.453407    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.454975    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:20.450398    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.451165    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.452844    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.453407    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.454975    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:20.458801 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:20.458828 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:20.483627 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:20.483662 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
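	Note: each cycle is gated by a process check rather than an API call. The pgrep flags: -x requires the pattern to match exactly, -n returns only the newest matching PID, and -f matches against the full command line rather than just the process name:

	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # Exit status 1 (no matching process) is what keeps these
	    # log-gathering cycles repeating.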
	I1210 07:54:23.014658 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:23.025123 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:23.025235 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:23.060798 1078428 cri.go:89] found id: ""
	I1210 07:54:23.060872 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.060909 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:23.060934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:23.061025 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:23.092890 1078428 cri.go:89] found id: ""
	I1210 07:54:23.092965 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.092987 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:23.093018 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:23.093129 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:23.122215 1078428 cri.go:89] found id: ""
	I1210 07:54:23.122290 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.122314 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:23.122335 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:23.122418 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:23.147080 1078428 cri.go:89] found id: ""
	I1210 07:54:23.147108 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.147117 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:23.147123 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:23.147213 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:23.171020 1078428 cri.go:89] found id: ""
	I1210 07:54:23.171043 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.171052 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:23.171064 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:23.171120 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:23.195821 1078428 cri.go:89] found id: ""
	I1210 07:54:23.195889 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.195914 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:23.195929 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:23.196016 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:23.219821 1078428 cri.go:89] found id: ""
	I1210 07:54:23.219901 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.219926 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:23.219941 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:23.220025 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:23.248052 1078428 cri.go:89] found id: ""
	I1210 07:54:23.248079 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.248088 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:23.248098 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:23.248109 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:23.305179 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:23.305215 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:23.321081 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:23.321111 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:23.391528 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:23.376042    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.376445    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.378118    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.385464    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.387083    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:23.391553 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:23.391565 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:23.416476 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:23.416509 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
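	The cycle above then repeats roughly every three seconds for the rest of the wait period: each crictl probe returns an empty ID list, the collector falls back to kubelet, dmesg, containerd, and container-status logs, and the bundled kubectl cannot reach an apiserver on localhost:8443. A minimal sketch of the same per-component probe, reusing only commands that appear in this log (the loop itself is illustrative, not minikube's own code):
	
	    # Probe each expected control-plane container; an empty result matches
	    # the 'No container was found matching ...' warnings above.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="${name}")
	      [ -z "${ids}" ] && echo "no container found matching ${name}"
	    done
	    # Fallback used for the 'container status' gather, exactly as logged:
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a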
	W1210 07:54:20.554048 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:22.554698 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:24.554805 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:25.951859 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:25.962115 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:25.962185 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:25.986216 1078428 cri.go:89] found id: ""
	I1210 07:54:25.986286 1078428 logs.go:282] 0 containers: []
	W1210 07:54:25.986310 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:25.986334 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:25.986426 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:26.011668 1078428 cri.go:89] found id: ""
	I1210 07:54:26.011696 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.011705 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:26.011712 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:26.011773 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:26.037538 1078428 cri.go:89] found id: ""
	I1210 07:54:26.037560 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.037569 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:26.037575 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:26.037634 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:26.066974 1078428 cri.go:89] found id: ""
	I1210 07:54:26.066996 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.067006 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:26.067013 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:26.067071 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:26.100870 1078428 cri.go:89] found id: ""
	I1210 07:54:26.100892 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.100901 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:26.100907 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:26.100966 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:26.130861 1078428 cri.go:89] found id: ""
	I1210 07:54:26.130883 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.130891 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:26.130897 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:26.130957 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:26.156407 1078428 cri.go:89] found id: ""
	I1210 07:54:26.156429 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.156438 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:26.156444 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:26.156502 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:26.182081 1078428 cri.go:89] found id: ""
	I1210 07:54:26.182102 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.182110 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:26.182119 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:26.182133 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:26.239878 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:26.239917 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:26.259189 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:26.259219 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:26.328449 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:26.321353    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.321729    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.322876    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.323201    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.324632    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:26.328475 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:26.328490 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:26.353246 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:26.353278 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:28.882607 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:28.893420 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:28.893495 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:28.917577 1078428 cri.go:89] found id: ""
	I1210 07:54:28.917603 1078428 logs.go:282] 0 containers: []
	W1210 07:54:28.917611 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:28.917617 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:28.917677 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:28.949094 1078428 cri.go:89] found id: ""
	I1210 07:54:28.949123 1078428 logs.go:282] 0 containers: []
	W1210 07:54:28.949132 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:28.949138 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:28.949202 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:28.976683 1078428 cri.go:89] found id: ""
	I1210 07:54:28.976708 1078428 logs.go:282] 0 containers: []
	W1210 07:54:28.976716 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:28.976722 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:28.976783 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:29.001326 1078428 cri.go:89] found id: ""
	I1210 07:54:29.001395 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.001420 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:29.001440 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:29.001526 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:29.026870 1078428 cri.go:89] found id: ""
	I1210 07:54:29.026894 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.026903 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:29.026909 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:29.026992 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:29.059072 1078428 cri.go:89] found id: ""
	I1210 07:54:29.059106 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.059115 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:29.059122 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:29.059190 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:29.089329 1078428 cri.go:89] found id: ""
	I1210 07:54:29.089363 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.089372 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:29.089379 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:29.089446 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:29.116648 1078428 cri.go:89] found id: ""
	I1210 07:54:29.116671 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.116680 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:29.116689 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:29.116701 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:29.141429 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:29.141465 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:29.168073 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:29.168102 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:29.223128 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:29.223165 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:29.239118 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:29.239149 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:29.304306 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:29.295477    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.296316    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.297933    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.298227    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.300445    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 07:54:27.054859 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:29.554545 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
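	Interleaved with the collector above, the no-preload runner (pid 1077343) keeps polling the node object directly and hits the same dead apiserver on its cluster address. A quick manual check of that endpoint, with the address and node name copied verbatim from the warnings (-k only skips TLS verification for this illustration; expect "connection refused" while the apiserver is down):
	
	    curl -k https://192.168.85.2:8443/api/v1/nodes/no-preload-587009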
	I1210 07:54:31.805827 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:31.819227 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:31.819305 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:31.852872 1078428 cri.go:89] found id: ""
	I1210 07:54:31.852901 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.852910 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:31.852916 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:31.852973 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:31.881145 1078428 cri.go:89] found id: ""
	I1210 07:54:31.881173 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.881182 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:31.881188 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:31.881249 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:31.907195 1078428 cri.go:89] found id: ""
	I1210 07:54:31.907218 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.907227 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:31.907233 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:31.907292 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:31.931775 1078428 cri.go:89] found id: ""
	I1210 07:54:31.931799 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.931808 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:31.931814 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:31.931876 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:31.957735 1078428 cri.go:89] found id: ""
	I1210 07:54:31.957764 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.957772 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:31.957779 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:31.957837 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:31.982202 1078428 cri.go:89] found id: ""
	I1210 07:54:31.982285 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.982308 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:31.982334 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:31.982441 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:32.011091 1078428 cri.go:89] found id: ""
	I1210 07:54:32.011119 1078428 logs.go:282] 0 containers: []
	W1210 07:54:32.011129 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:32.011138 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:32.011205 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:32.039293 1078428 cri.go:89] found id: ""
	I1210 07:54:32.039371 1078428 logs.go:282] 0 containers: []
	W1210 07:54:32.039388 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:32.039399 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:32.039410 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:32.067441 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:32.067482 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:32.105238 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:32.105273 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:32.164873 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:32.164913 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:32.181394 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:32.181477 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:32.250195 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:32.241937    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.242446    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.244284    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.244640    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.246223    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 07:54:32.054006 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:34.054566 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:34.751129 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:34.761490 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:34.761559 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:34.785680 1078428 cri.go:89] found id: ""
	I1210 07:54:34.785702 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.785711 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:34.785716 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:34.785775 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:34.820785 1078428 cri.go:89] found id: ""
	I1210 07:54:34.820809 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.820817 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:34.820823 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:34.820892 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:34.852508 1078428 cri.go:89] found id: ""
	I1210 07:54:34.852531 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.852539 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:34.852545 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:34.852604 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:34.879064 1078428 cri.go:89] found id: ""
	I1210 07:54:34.879095 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.879104 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:34.879111 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:34.879179 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:34.908815 1078428 cri.go:89] found id: ""
	I1210 07:54:34.908849 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.908858 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:34.908864 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:34.908933 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:34.939793 1078428 cri.go:89] found id: ""
	I1210 07:54:34.939820 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.939831 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:34.939838 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:34.939902 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:34.966660 1078428 cri.go:89] found id: ""
	I1210 07:54:34.966730 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.966754 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:34.966775 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:34.966877 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:34.997175 1078428 cri.go:89] found id: ""
	I1210 07:54:34.997202 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.997211 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:34.997221 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:34.997233 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:35.054362 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:35.054504 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:35.071310 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:35.071339 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:35.154263 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:35.146083    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.146679    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.148200    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.148710    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.150271    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:35.154285 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:35.154298 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:35.184377 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:35.184427 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:37.716479 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:37.727384 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:37.727475 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:37.758151 1078428 cri.go:89] found id: ""
	I1210 07:54:37.758175 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.758183 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:37.758189 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:37.758249 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:37.783547 1078428 cri.go:89] found id: ""
	I1210 07:54:37.783572 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.783580 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:37.783586 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:37.783652 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:37.824269 1078428 cri.go:89] found id: ""
	I1210 07:54:37.824302 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.824320 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:37.824326 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:37.824392 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:37.859292 1078428 cri.go:89] found id: ""
	I1210 07:54:37.859315 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.859324 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:37.859332 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:37.859391 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:37.887370 1078428 cri.go:89] found id: ""
	I1210 07:54:37.887395 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.887404 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:37.887411 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:37.887471 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:37.912568 1078428 cri.go:89] found id: ""
	I1210 07:54:37.912590 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.912599 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:37.912605 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:37.912667 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:37.942226 1078428 cri.go:89] found id: ""
	I1210 07:54:37.942294 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.942321 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:37.942341 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:37.942416 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:37.967116 1078428 cri.go:89] found id: ""
	I1210 07:54:37.967186 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.967211 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:37.967234 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:37.967261 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:38.026081 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:38.026123 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:38.044051 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:38.044086 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:38.137383 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:38.129031    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.129785    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.131296    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.131679    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.133181    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:38.137408 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:38.137420 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:38.163137 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:38.163174 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:54:36.553998 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:38.554925 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:40.692712 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:40.705786 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:40.705862 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:40.730857 1078428 cri.go:89] found id: ""
	I1210 07:54:40.730881 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.730890 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:40.730896 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:40.730956 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:40.759374 1078428 cri.go:89] found id: ""
	I1210 07:54:40.759401 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.759410 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:40.759417 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:40.759481 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:40.784874 1078428 cri.go:89] found id: ""
	I1210 07:54:40.784898 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.784906 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:40.784912 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:40.784972 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:40.829615 1078428 cri.go:89] found id: ""
	I1210 07:54:40.829638 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.829648 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:40.829655 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:40.829714 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:40.855514 1078428 cri.go:89] found id: ""
	I1210 07:54:40.855537 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.855547 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:40.855553 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:40.855622 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:40.880645 1078428 cri.go:89] found id: ""
	I1210 07:54:40.880674 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.880683 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:40.880699 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:40.880762 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:40.908526 1078428 cri.go:89] found id: ""
	I1210 07:54:40.908553 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.908562 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:40.908568 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:40.908627 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:40.933389 1078428 cri.go:89] found id: ""
	I1210 07:54:40.933417 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.933427 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:40.933466 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:40.933485 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:40.989429 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:40.989508 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:41.005657 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:41.005748 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:41.093001 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:41.084101    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.084887    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.086620    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.087167    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.088880    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:41.093075 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:41.093107 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:41.120941 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:41.121022 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:43.650332 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:43.660886 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:43.660957 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:43.685546 1078428 cri.go:89] found id: ""
	I1210 07:54:43.685572 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.685582 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:43.685590 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:43.685652 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:43.710551 1078428 cri.go:89] found id: ""
	I1210 07:54:43.710575 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.710584 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:43.710590 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:43.710651 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:43.735321 1078428 cri.go:89] found id: ""
	I1210 07:54:43.735347 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.735357 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:43.735363 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:43.735422 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:43.760265 1078428 cri.go:89] found id: ""
	I1210 07:54:43.760290 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.760299 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:43.760305 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:43.760371 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:43.785386 1078428 cri.go:89] found id: ""
	I1210 07:54:43.785412 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.785421 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:43.785427 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:43.785491 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:43.812278 1078428 cri.go:89] found id: ""
	I1210 07:54:43.812305 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.812323 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:43.812331 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:43.812390 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:43.844260 1078428 cri.go:89] found id: ""
	I1210 07:54:43.844288 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.844297 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:43.844303 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:43.844374 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:43.878456 1078428 cri.go:89] found id: ""
	I1210 07:54:43.878503 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.878512 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:43.878522 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:43.878533 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:43.934467 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:43.934503 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:43.951761 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:43.951790 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:44.019672 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:44.010215    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.011300    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.013256    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.013896    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.015584    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:44.010215    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.011300    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.013256    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.013896    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.015584    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:44.019739 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:44.019764 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:44.045374 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:44.045448 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
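Every gather cycle above runs the same per-component container sweep before falling back to log collection. A minimal sketch that reproduces the sweep by hand, assuming shell access to the node (e.g. via minikube ssh); the component list and crictl flags are copied from the log:

        # Probe each expected control-plane container, as logs.go does per cycle.
        for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                 kube-controller-manager kindnet kubernetes-dashboard; do
          echo "== $c =="
          sudo crictl ps -a --quiet --name="$c"   # empty output matches: found id: ""
        done
        # Fallback used by the "container status" step when crictl is not on PATH:
        sudo `which crictl || echo crictl` ps -a || sudo docker ps -a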
	W1210 07:54:41.053999 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:43.054974 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:45.055139 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
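The interleaved 1077343 lines come from the parallel no-preload test, which is polling the node's Ready condition against 192.168.85.2:8443. A hedged one-liner for reproducing the same refused dial from the host (assumes curl is available; -k skips TLS verification since this is purely a reachability check, not an authenticated API call):

        curl -k --max-time 5 https://192.168.85.2:8443/api/v1/nodes/no-preload-587009 \
          || echo "dial refused, as in the node_ready.go:55 warnings"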
	I1210 07:54:46.583553 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:46.594544 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:46.594614 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:46.620989 1078428 cri.go:89] found id: ""
	I1210 07:54:46.621016 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.621026 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:46.621032 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:46.621092 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:46.646885 1078428 cri.go:89] found id: ""
	I1210 07:54:46.646912 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.646921 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:46.646927 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:46.646993 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:46.671522 1078428 cri.go:89] found id: ""
	I1210 07:54:46.671545 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.671555 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:46.671561 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:46.671627 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:46.697035 1078428 cri.go:89] found id: ""
	I1210 07:54:46.697057 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.697066 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:46.697076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:46.697135 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:46.721985 1078428 cri.go:89] found id: ""
	I1210 07:54:46.722008 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.722016 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:46.722023 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:46.722081 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:46.750862 1078428 cri.go:89] found id: ""
	I1210 07:54:46.750885 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.750894 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:46.750900 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:46.750957 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:46.775321 1078428 cri.go:89] found id: ""
	I1210 07:54:46.775347 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.775357 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:46.775363 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:46.775422 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:46.804576 1078428 cri.go:89] found id: ""
	I1210 07:54:46.804603 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.804612 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:46.804624 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:46.804635 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:46.869024 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:46.869059 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:46.887039 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:46.887068 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:46.955257 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:46.946979    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.947599    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.949092    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.949593    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.951087    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:46.946979    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.947599    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.949092    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.949593    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.951087    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:46.955281 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:46.955294 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:46.981722 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:46.981766 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:54:47.553929 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:49.554809 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:49.512895 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:49.523585 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:49.523660 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:49.553762 1078428 cri.go:89] found id: ""
	I1210 07:54:49.553799 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.553809 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:49.553815 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:49.553883 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:49.584365 1078428 cri.go:89] found id: ""
	I1210 07:54:49.584397 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.584406 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:49.584412 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:49.584473 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:49.609054 1078428 cri.go:89] found id: ""
	I1210 07:54:49.609078 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.609088 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:49.609094 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:49.609153 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:49.633506 1078428 cri.go:89] found id: ""
	I1210 07:54:49.633585 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.633612 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:49.633632 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:49.633727 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:49.660681 1078428 cri.go:89] found id: ""
	I1210 07:54:49.660705 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.660713 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:49.660719 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:49.660779 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:49.684429 1078428 cri.go:89] found id: ""
	I1210 07:54:49.684456 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.684465 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:49.684472 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:49.684559 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:49.708792 1078428 cri.go:89] found id: ""
	I1210 07:54:49.708825 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.708834 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:49.708841 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:49.708907 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:49.733028 1078428 cri.go:89] found id: ""
	I1210 07:54:49.733061 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.733070 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:49.733080 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:49.733093 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:49.788419 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:49.788454 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:49.806199 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:49.806229 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:49.890193 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:49.880173    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.881024    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.882835    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.883725    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.885590    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:49.880173    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.881024    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.882835    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.883725    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.885590    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:49.890216 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:49.890229 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:49.916164 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:49.916201 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
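Each "describe nodes" attempt fails identically: kubectl cannot reach localhost:8443 from inside the node. A short sketch for confirming the root cause directly, namely that nothing is listening on the apiserver port at all (assumes ss and curl exist in the node image, which is not shown in this log):

        sudo ss -ltnp | grep 8443 || echo "no listener on :8443"
        curl -ksS --max-time 5 https://localhost:8443/healthz || true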
	I1210 07:54:52.445192 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:52.455938 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:52.456011 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:52.483578 1078428 cri.go:89] found id: ""
	I1210 07:54:52.483607 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.483615 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:52.483622 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:52.483681 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:52.508996 1078428 cri.go:89] found id: ""
	I1210 07:54:52.509019 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.509028 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:52.509035 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:52.509100 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:52.534163 1078428 cri.go:89] found id: ""
	I1210 07:54:52.534189 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.534197 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:52.534204 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:52.534262 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:52.559446 1078428 cri.go:89] found id: ""
	I1210 07:54:52.559468 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.559476 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:52.559482 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:52.559538 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:52.585685 1078428 cri.go:89] found id: ""
	I1210 07:54:52.585705 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.585714 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:52.585720 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:52.585781 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:52.610362 1078428 cri.go:89] found id: ""
	I1210 07:54:52.610387 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.610396 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:52.610429 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:52.610553 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:52.639114 1078428 cri.go:89] found id: ""
	I1210 07:54:52.639140 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.639149 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:52.639155 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:52.639239 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:52.669083 1078428 cri.go:89] found id: ""
	I1210 07:54:52.669111 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.669120 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:52.669129 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:52.669141 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:52.684926 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:52.684953 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:52.749001 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:52.740050    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.740864    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.742595    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.743091    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.744746    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:52.740050    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.740864    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.742595    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.743091    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.744746    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:52.749025 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:52.749037 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:52.773227 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:52.773261 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:52.804197 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:52.804276 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
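With no control-plane containers to inspect, the kubelet and containerd journals are the only useful evidence each cycle collects. A sketch for pulling a wider window than the 400-line tails used above, when triaging why the static pods never start (unit names as in the log; the combined-unit and --since usage is an assumption about what is helpful here, not something the harness runs):

        sudo journalctl -u kubelet -u containerd --since "10 min ago" --no-pager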
	W1210 07:54:52.054720 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:54.555065 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:55.368759 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:55.379351 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:55.379439 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:55.403912 1078428 cri.go:89] found id: ""
	I1210 07:54:55.403937 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.403946 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:55.403953 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:55.404021 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:55.432879 1078428 cri.go:89] found id: ""
	I1210 07:54:55.432902 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.432912 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:55.432918 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:55.432981 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:55.457499 1078428 cri.go:89] found id: ""
	I1210 07:54:55.457528 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.457537 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:55.457546 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:55.457605 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:55.482796 1078428 cri.go:89] found id: ""
	I1210 07:54:55.482824 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.482833 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:55.482840 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:55.482900 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:55.508135 1078428 cri.go:89] found id: ""
	I1210 07:54:55.508158 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.508167 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:55.508173 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:55.508239 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:55.532757 1078428 cri.go:89] found id: ""
	I1210 07:54:55.532828 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.532849 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:55.532856 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:55.532923 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:55.558383 1078428 cri.go:89] found id: ""
	I1210 07:54:55.558408 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.558431 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:55.558437 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:55.558540 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:55.584737 1078428 cri.go:89] found id: ""
	I1210 07:54:55.584768 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.584780 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:55.584790 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:55.584802 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:55.611899 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:55.611929 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:55.667940 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:55.667974 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:55.683872 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:55.683902 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:55.753488 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:55.745404    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.746023    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.747737    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.748225    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.749705    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:55.745404    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.746023    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.747737    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.748225    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.749705    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:55.753511 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:55.753523 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:58.279433 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:58.290275 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:58.290358 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:58.315732 1078428 cri.go:89] found id: ""
	I1210 07:54:58.315760 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.315769 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:58.315775 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:58.315840 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:58.354970 1078428 cri.go:89] found id: ""
	I1210 07:54:58.354993 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.355002 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:58.355009 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:58.355080 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:58.387261 1078428 cri.go:89] found id: ""
	I1210 07:54:58.387290 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.387300 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:58.387307 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:58.387366 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:58.415659 1078428 cri.go:89] found id: ""
	I1210 07:54:58.415683 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.415691 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:58.415698 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:58.415762 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:58.440257 1078428 cri.go:89] found id: ""
	I1210 07:54:58.440283 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.440292 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:58.440298 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:58.440380 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:58.465572 1078428 cri.go:89] found id: ""
	I1210 07:54:58.465598 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.465607 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:58.465614 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:58.465672 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:58.490288 1078428 cri.go:89] found id: ""
	I1210 07:54:58.490313 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.490321 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:58.490327 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:58.490384 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:58.516549 1078428 cri.go:89] found id: ""
	I1210 07:54:58.516572 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.516580 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:58.516590 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:58.516601 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:58.542195 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:58.542234 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:58.570592 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:58.570623 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:58.627983 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:58.628020 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:58.644192 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:58.644218 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:58.708892 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:58.700163    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.700595    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.702324    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.703126    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.704702    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:58.700163    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.700595    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.702324    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.703126    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.704702    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
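The invocation pattern minikube uses in these failing steps, a version-pinned kubectl binary driven against the node-local admin kubeconfig, can be reused directly for ad-hoc checks once the apiserver comes up. Paths are copied verbatim from the log; substituting get nodes -o wide for describe nodes is just an illustrative variation:

        sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get nodes -o wide \
          --kubeconfig=/var/lib/minikube/kubeconfig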
	W1210 07:54:57.053952 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:59.054069 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:01.209184 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:01.221080 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:01.221155 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:01.250125 1078428 cri.go:89] found id: ""
	I1210 07:55:01.250154 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.250163 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:01.250178 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:01.250240 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:01.276827 1078428 cri.go:89] found id: ""
	I1210 07:55:01.276854 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.276869 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:01.276876 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:01.276938 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:01.311772 1078428 cri.go:89] found id: ""
	I1210 07:55:01.311808 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.311818 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:01.311824 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:01.311894 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:01.344006 1078428 cri.go:89] found id: ""
	I1210 07:55:01.344042 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.344052 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:01.344059 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:01.344131 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:01.370453 1078428 cri.go:89] found id: ""
	I1210 07:55:01.370508 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.370517 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:01.370524 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:01.370596 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:01.396784 1078428 cri.go:89] found id: ""
	I1210 07:55:01.396811 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.396833 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:01.396840 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:01.396925 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:01.427026 1078428 cri.go:89] found id: ""
	I1210 07:55:01.427053 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.427064 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:01.427076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:01.427145 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:01.453716 1078428 cri.go:89] found id: ""
	I1210 07:55:01.453745 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.453755 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:01.453765 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:01.453787 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:01.483021 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:01.483048 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:01.538363 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:01.538402 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:01.555879 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:01.555912 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:01.624093 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:01.614677    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.615511    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.617288    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.617899    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.619694    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:01.614677    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.615511    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.617288    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.617899    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.619694    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:01.624120 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:01.624136 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
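Each cycle opens with the same process probe, repeated below. Reading its flags: -f matches against the full command line, -x requires that match to be exact, and -n returns only the newest matching process, so the pattern only hits a real kube-apiserver launched for this minikube profile. The silent non-zero exit seen throughout this log simply means no such process exists yet:

        sudo pgrep -xnf 'kube-apiserver.*minikube.*'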
	I1210 07:55:04.151461 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:04.161982 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:04.162052 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:04.187914 1078428 cri.go:89] found id: ""
	I1210 07:55:04.187940 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.187955 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:04.187961 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:04.188020 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:04.212016 1078428 cri.go:89] found id: ""
	I1210 07:55:04.212039 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.212048 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:04.212054 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:04.212113 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:04.237062 1078428 cri.go:89] found id: ""
	I1210 07:55:04.237088 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.237098 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:04.237107 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:04.237166 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:04.262844 1078428 cri.go:89] found id: ""
	I1210 07:55:04.262867 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.262876 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:04.262883 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:04.262943 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:04.288099 1078428 cri.go:89] found id: ""
	I1210 07:55:04.288125 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.288134 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:04.288140 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:04.288198 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:04.315819 1078428 cri.go:89] found id: ""
	I1210 07:55:04.315846 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.315855 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:04.315861 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:04.315923 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:04.349897 1078428 cri.go:89] found id: ""
	I1210 07:55:04.349919 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.349928 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:04.349934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:04.349992 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:04.374228 1078428 cri.go:89] found id: ""
	I1210 07:55:04.374255 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.374264 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:04.374274 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:04.374285 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:04.430541 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:04.430576 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:04.446913 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:04.446947 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:01.054690 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:03.054791 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:04.519646 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:04.510952    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.511715    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.513430    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.514116    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.515790    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:04.519667 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:04.519679 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:04.545056 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:04.545097 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:07.074592 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:07.085572 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:07.085640 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:07.111394 1078428 cri.go:89] found id: ""
	I1210 07:55:07.111418 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.111426 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:07.111432 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:07.111497 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:07.135823 1078428 cri.go:89] found id: ""
	I1210 07:55:07.135848 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.135857 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:07.135864 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:07.135923 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:07.164275 1078428 cri.go:89] found id: ""
	I1210 07:55:07.164297 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.164306 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:07.164311 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:07.164385 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:07.193334 1078428 cri.go:89] found id: ""
	I1210 07:55:07.193358 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.193367 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:07.193373 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:07.193429 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:07.217929 1078428 cri.go:89] found id: ""
	I1210 07:55:07.217955 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.217964 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:07.217970 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:07.218032 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:07.243152 1078428 cri.go:89] found id: ""
	I1210 07:55:07.243176 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.243185 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:07.243191 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:07.243251 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:07.270888 1078428 cri.go:89] found id: ""
	I1210 07:55:07.270918 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.270927 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:07.270934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:07.270992 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:07.304504 1078428 cri.go:89] found id: ""
	I1210 07:55:07.304531 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.304540 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:07.304549 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:07.304561 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:07.370744 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:07.370786 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:07.386532 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:07.386606 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:07.450870 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:07.442507    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.443254    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.444829    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.445138    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.446858    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:07.450892 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:07.450906 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:07.476441 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:07.476476 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:55:05.554590 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:08.054043 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:10.006374 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:10.031408 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:10.031500 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:10.072527 1078428 cri.go:89] found id: ""
	I1210 07:55:10.072558 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.072568 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:10.072575 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:10.072637 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:10.107560 1078428 cri.go:89] found id: ""
	I1210 07:55:10.107605 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.107615 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:10.107621 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:10.107694 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:10.138416 1078428 cri.go:89] found id: ""
	I1210 07:55:10.138441 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.138450 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:10.138456 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:10.138547 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:10.163271 1078428 cri.go:89] found id: ""
	I1210 07:55:10.163294 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.163303 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:10.163309 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:10.163372 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:10.193549 1078428 cri.go:89] found id: ""
	I1210 07:55:10.193625 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.193637 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:10.193664 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:10.193766 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:10.225083 1078428 cri.go:89] found id: ""
	I1210 07:55:10.225169 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.225182 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:10.225212 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:10.225307 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:10.251042 1078428 cri.go:89] found id: ""
	I1210 07:55:10.251067 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.251082 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:10.251089 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:10.251175 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:10.275656 1078428 cri.go:89] found id: ""
	I1210 07:55:10.275681 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.275690 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:10.275699 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:10.275711 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:10.335591 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:10.335628 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:10.352546 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:10.352577 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:10.421057 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:10.412822    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.413267    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.414660    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.415223    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.416854    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:10.421081 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:10.421094 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:10.446445 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:10.446578 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:12.978285 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:12.988877 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:12.988951 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:13.014715 1078428 cri.go:89] found id: ""
	I1210 07:55:13.014738 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.014746 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:13.014753 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:13.014812 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:13.039187 1078428 cri.go:89] found id: ""
	I1210 07:55:13.039217 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.039226 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:13.039231 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:13.039293 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:13.079663 1078428 cri.go:89] found id: ""
	I1210 07:55:13.079687 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.079696 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:13.079702 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:13.079762 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:13.116097 1078428 cri.go:89] found id: ""
	I1210 07:55:13.116118 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.116127 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:13.116133 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:13.116190 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:13.141856 1078428 cri.go:89] found id: ""
	I1210 07:55:13.141921 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.141946 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:13.141973 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:13.142049 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:13.166245 1078428 cri.go:89] found id: ""
	I1210 07:55:13.166318 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.166341 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:13.166361 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:13.166452 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:13.190766 1078428 cri.go:89] found id: ""
	I1210 07:55:13.190790 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.190799 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:13.190805 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:13.190864 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:13.218179 1078428 cri.go:89] found id: ""
	I1210 07:55:13.218217 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.218227 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:13.218253 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:13.218270 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:13.234044 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:13.234082 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:13.303134 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:13.286450    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.287225    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.289050    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.289622    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.291338    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:13.303158 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:13.303170 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:13.330980 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:13.331017 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:13.358836 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:13.358865 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:55:10.554264 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:13.054017 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:15.055138 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:15.922613 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:15.933295 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:15.933370 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:15.958341 1078428 cri.go:89] found id: ""
	I1210 07:55:15.958364 1078428 logs.go:282] 0 containers: []
	W1210 07:55:15.958373 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:15.958378 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:15.958434 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:15.983285 1078428 cri.go:89] found id: ""
	I1210 07:55:15.983309 1078428 logs.go:282] 0 containers: []
	W1210 07:55:15.983324 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:15.983330 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:15.983387 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:16.008789 1078428 cri.go:89] found id: ""
	I1210 07:55:16.008816 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.008825 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:16.008831 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:16.008926 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:16.035859 1078428 cri.go:89] found id: ""
	I1210 07:55:16.035931 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.035946 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:16.035955 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:16.036022 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:16.068655 1078428 cri.go:89] found id: ""
	I1210 07:55:16.068688 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.068697 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:16.068704 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:16.068776 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:16.106754 1078428 cri.go:89] found id: ""
	I1210 07:55:16.106780 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.106790 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:16.106796 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:16.106862 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:16.133097 1078428 cri.go:89] found id: ""
	I1210 07:55:16.133124 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.133133 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:16.133139 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:16.133207 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:16.157892 1078428 cri.go:89] found id: ""
	I1210 07:55:16.157938 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.157947 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:16.157957 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:16.157970 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:16.212808 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:16.212848 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:16.228781 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:16.228813 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:16.291789 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:16.283368    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.283909    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.285648    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.286193    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.287783    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:16.291811 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:16.291823 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:16.319342 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:16.319380 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:18.855190 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:18.865732 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:18.865807 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:18.889830 1078428 cri.go:89] found id: ""
	I1210 07:55:18.889855 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.889864 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:18.889871 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:18.889936 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:18.914345 1078428 cri.go:89] found id: ""
	I1210 07:55:18.914370 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.914379 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:18.914385 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:18.914444 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:18.939221 1078428 cri.go:89] found id: ""
	I1210 07:55:18.939243 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.939253 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:18.939258 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:18.939316 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:18.967766 1078428 cri.go:89] found id: ""
	I1210 07:55:18.967788 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.967796 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:18.967803 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:18.967867 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:18.996962 1078428 cri.go:89] found id: ""
	I1210 07:55:18.996984 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.996992 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:18.996999 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:18.997055 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:19.023004 1078428 cri.go:89] found id: ""
	I1210 07:55:19.023031 1078428 logs.go:282] 0 containers: []
	W1210 07:55:19.023043 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:19.023052 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:19.023115 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:19.057510 1078428 cri.go:89] found id: ""
	I1210 07:55:19.057540 1078428 logs.go:282] 0 containers: []
	W1210 07:55:19.057549 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:19.057555 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:19.057611 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:19.092862 1078428 cri.go:89] found id: ""
	I1210 07:55:19.092891 1078428 logs.go:282] 0 containers: []
	W1210 07:55:19.092900 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:19.092910 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:19.092921 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:19.150597 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:19.150632 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:19.166174 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:19.166252 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:19.232235 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:19.224144    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.224636    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.226223    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.226815    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.228275    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:19.232259 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:19.232272 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:19.256392 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:19.256424 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:55:17.554658 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:20.054087 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:21.783358 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:21.793821 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:21.793896 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:21.818542 1078428 cri.go:89] found id: ""
	I1210 07:55:21.818564 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.818573 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:21.818580 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:21.818639 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:21.842392 1078428 cri.go:89] found id: ""
	I1210 07:55:21.842414 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.842423 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:21.842429 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:21.842509 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:21.869909 1078428 cri.go:89] found id: ""
	I1210 07:55:21.869931 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.869940 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:21.869947 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:21.870009 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:21.896175 1078428 cri.go:89] found id: ""
	I1210 07:55:21.896197 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.896206 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:21.896212 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:21.896272 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:21.924596 1078428 cri.go:89] found id: ""
	I1210 07:55:21.924672 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.924684 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:21.924691 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:21.924781 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:21.952789 1078428 cri.go:89] found id: ""
	I1210 07:55:21.952811 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.952820 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:21.952826 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:21.952885 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:21.978579 1078428 cri.go:89] found id: ""
	I1210 07:55:21.978603 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.978611 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:21.978617 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:21.978678 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:22.002801 1078428 cri.go:89] found id: ""
	I1210 07:55:22.002829 1078428 logs.go:282] 0 containers: []
	W1210 07:55:22.002838 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:22.002848 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:22.002866 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:22.021034 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:22.021067 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:22.101183 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:22.089820    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.090755    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.092439    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.092768    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.094252    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:22.101208 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:22.101223 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:22.133557 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:22.133593 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:22.160692 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:22.160719 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:55:22.554004 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:25.054003 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:24.716616 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:24.727463 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:24.727545 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:24.752976 1078428 cri.go:89] found id: ""
	I1210 07:55:24.753005 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.753014 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:24.753021 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:24.753081 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:24.780812 1078428 cri.go:89] found id: ""
	I1210 07:55:24.780841 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.780850 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:24.780856 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:24.780913 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:24.806877 1078428 cri.go:89] found id: ""
	I1210 07:55:24.806900 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.806909 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:24.806915 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:24.806979 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:24.836752 1078428 cri.go:89] found id: ""
	I1210 07:55:24.836785 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.836795 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:24.836809 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:24.836876 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:24.863110 1078428 cri.go:89] found id: ""
	I1210 07:55:24.863134 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.863143 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:24.863153 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:24.863219 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:24.888190 1078428 cri.go:89] found id: ""
	I1210 07:55:24.888214 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.888223 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:24.888230 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:24.888289 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:24.912349 1078428 cri.go:89] found id: ""
	I1210 07:55:24.912383 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.912394 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:24.912400 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:24.912462 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:24.937752 1078428 cri.go:89] found id: ""
	I1210 07:55:24.937781 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.937790 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:24.937799 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:24.937811 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:24.992892 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:24.992928 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:25.010173 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:25.010241 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:25.099629 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:25.089564    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.090718    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.091639    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.093411    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.093968    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:25.099713 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:25.099746 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:25.131383 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:25.131423 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:27.663351 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:27.674757 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:27.674843 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:27.704367 1078428 cri.go:89] found id: ""
	I1210 07:55:27.704400 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.704409 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:27.704420 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:27.704484 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:27.731740 1078428 cri.go:89] found id: ""
	I1210 07:55:27.731773 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.731783 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:27.731790 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:27.731852 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:27.761848 1078428 cri.go:89] found id: ""
	I1210 07:55:27.761871 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.761880 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:27.761886 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:27.761952 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:27.789498 1078428 cri.go:89] found id: ""
	I1210 07:55:27.789527 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.789537 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:27.789543 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:27.789603 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:27.815293 1078428 cri.go:89] found id: ""
	I1210 07:55:27.815320 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.815335 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:27.815342 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:27.815401 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:27.840211 1078428 cri.go:89] found id: ""
	I1210 07:55:27.840238 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.840249 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:27.840256 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:27.840320 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:27.866289 1078428 cri.go:89] found id: ""
	I1210 07:55:27.866313 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.866323 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:27.866329 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:27.866388 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:27.892533 1078428 cri.go:89] found id: ""
	I1210 07:55:27.892560 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.892569 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:27.892578 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:27.892590 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:27.952019 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:27.952063 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:27.969597 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:27.969631 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:28.035775 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:28.025972    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.026696    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.028884    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.029384    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.031424    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:28.025972    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.026696    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.028884    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.029384    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.031424    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:28.035802 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:28.035816 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:28.064304 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:28.064344 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
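
Each cycle above repeats the same probe sequence: `pgrep -xnf kube-apiserver.*minikube.*` to check for an apiserver process, then one `crictl ps -a --quiet --name=<component>` per control-plane component, where empty output is recorded as `found id: ""` and `0 containers`. A rough re-creation of that scan, assuming `crictl` is on PATH (a sketch mirroring the commands in the log, not minikube's actual implementation):

```go
// Hypothetical per-component scan: ask crictl for each control-plane
// piece by name and report which ones have no container at all.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, name := range components {
		// --quiet prints only container IDs, one per line; empty
		// output therefore means no container matched the name.
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name", name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers\n", name, len(ids))
	}
}
```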
	W1210 07:55:27.054076 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:29.054524 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
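
The `node_ready.go:55` warnings interleaved here come from a second test profile (`no-preload-587009`, process 1077343) sharing the same log stream; it polls that node's Ready condition roughly every two seconds against 192.168.85.2:8443 and hits the same connection-refused wall. In client-go terms the retry loop looks roughly like this (a sketch assuming a pre-built `clientset`; not minikube's code):

```go
// Sketch of a "wait for node Ready" loop with client-go.
import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// Matches the log: warn and retry on connection refused.
			log.Printf("error getting node %q (will retry): %v", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}
```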
	I1210 07:55:30.599553 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:30.609953 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:30.610023 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:30.634355 1078428 cri.go:89] found id: ""
	I1210 07:55:30.634384 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.634393 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:30.634400 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:30.634460 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:30.658396 1078428 cri.go:89] found id: ""
	I1210 07:55:30.658435 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.658444 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:30.658450 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:30.658540 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:30.683976 1078428 cri.go:89] found id: ""
	I1210 07:55:30.684014 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.684023 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:30.684030 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:30.684099 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:30.708278 1078428 cri.go:89] found id: ""
	I1210 07:55:30.708302 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.708311 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:30.708317 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:30.708376 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:30.733222 1078428 cri.go:89] found id: ""
	I1210 07:55:30.733253 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.733262 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:30.733269 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:30.733368 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:30.758588 1078428 cri.go:89] found id: ""
	I1210 07:55:30.758614 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.758623 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:30.758630 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:30.758700 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:30.783735 1078428 cri.go:89] found id: ""
	I1210 07:55:30.783802 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.783826 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:30.783841 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:30.783910 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:30.807833 1078428 cri.go:89] found id: ""
	I1210 07:55:30.807859 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.807867 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:30.807876 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:30.807888 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:30.872941 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:30.864693    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.865280    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.867101    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.867488    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.869102    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:30.864693    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.865280    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.867101    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.867488    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.869102    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:30.872961 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:30.872975 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:30.899140 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:30.899181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:30.926302 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:30.926333 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:30.982513 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:30.982550 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
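
Between scans the collector gathers bounded context: the last 400 journal lines for kubelet and containerd, a severity-filtered dmesg (`--level warn,err,crit,alert,emerg`), and a container listing that falls back to docker when crictl is absent. A hypothetical wrapper for that gather step (the command strings are copied from the log; the Go around them is an assumption):

```go
// Hypothetical gather step: run each shell pipeline from the log via
// `bash -c` and keep whatever output comes back, even on error, since
// a failing unit's partial logs are still useful.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":    `sudo journalctl -u kubelet -n 400`,
		"containerd": `sudo journalctl -u containerd -n 400`,
		"dmesg":      `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
	}
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("== %s (err: %v) ==\n", name, err)
		} else {
			fmt.Printf("== %s ==\n", name)
		}
		fmt.Print(string(out))
	}
}
```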
	I1210 07:55:33.499017 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:33.509596 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:33.509669 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:33.540057 1078428 cri.go:89] found id: ""
	I1210 07:55:33.540082 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.540090 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:33.540097 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:33.540160 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:33.570955 1078428 cri.go:89] found id: ""
	I1210 07:55:33.570982 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.570991 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:33.570997 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:33.571056 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:33.605930 1078428 cri.go:89] found id: ""
	I1210 07:55:33.605958 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.605968 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:33.605974 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:33.606036 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:33.634909 1078428 cri.go:89] found id: ""
	I1210 07:55:33.634932 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.634941 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:33.634947 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:33.635008 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:33.659844 1078428 cri.go:89] found id: ""
	I1210 07:55:33.659912 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.659927 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:33.659935 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:33.659999 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:33.684878 1078428 cri.go:89] found id: ""
	I1210 07:55:33.684902 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.684911 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:33.684918 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:33.684983 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:33.709473 1078428 cri.go:89] found id: ""
	I1210 07:55:33.709496 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.709505 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:33.709517 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:33.709580 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:33.736059 1078428 cri.go:89] found id: ""
	I1210 07:55:33.736086 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.736095 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:33.736105 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:33.736117 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:33.795512 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:33.795546 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:33.811254 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:33.811282 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:33.878126 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:33.869884    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.870553    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.872067    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.872608    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.874097    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:33.869884    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.870553    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.872067    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.872608    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.874097    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:33.878148 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:33.878163 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:33.904005 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:33.904041 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:55:31.054696 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:33.054864 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:36.431681 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:36.442446 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:36.442546 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:36.466520 1078428 cri.go:89] found id: ""
	I1210 07:55:36.466544 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.466553 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:36.466559 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:36.466616 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:36.497280 1078428 cri.go:89] found id: ""
	I1210 07:55:36.497307 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.497316 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:36.497322 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:36.497382 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:36.526966 1078428 cri.go:89] found id: ""
	I1210 07:55:36.526988 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.526998 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:36.527003 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:36.527067 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:36.566317 1078428 cri.go:89] found id: ""
	I1210 07:55:36.566342 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.566351 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:36.566357 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:36.566432 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:36.598673 1078428 cri.go:89] found id: ""
	I1210 07:55:36.598699 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.598716 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:36.598722 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:36.598795 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:36.638514 1078428 cri.go:89] found id: ""
	I1210 07:55:36.638537 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.638545 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:36.638551 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:36.638621 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:36.663534 1078428 cri.go:89] found id: ""
	I1210 07:55:36.663603 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.663623 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:36.663630 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:36.663715 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:36.692427 1078428 cri.go:89] found id: ""
	I1210 07:55:36.692451 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.692461 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:36.692471 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:36.692482 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:36.717965 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:36.718003 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:36.749638 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:36.749668 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:36.806519 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:36.806562 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:36.823288 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:36.823315 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:36.888077 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:36.879018    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.879649    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.881436    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.881956    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.883715    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:36.879018    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.879649    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.881436    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.881956    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.883715    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:39.389725 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:39.400775 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:39.400867 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:39.426362 1078428 cri.go:89] found id: ""
	I1210 07:55:39.426389 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.426398 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:39.426407 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:39.426555 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:39.455943 1078428 cri.go:89] found id: ""
	I1210 07:55:39.455969 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.455978 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:39.455984 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:39.456043 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:39.484097 1078428 cri.go:89] found id: ""
	I1210 07:55:39.484127 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.484142 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:39.484150 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:39.484209 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	W1210 07:55:35.554545 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:37.554652 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:40.054927 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:39.510381 1078428 cri.go:89] found id: ""
	I1210 07:55:39.510408 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.510417 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:39.510423 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:39.510508 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:39.534754 1078428 cri.go:89] found id: ""
	I1210 07:55:39.534819 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.534838 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:39.534845 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:39.534903 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:39.577369 1078428 cri.go:89] found id: ""
	I1210 07:55:39.577400 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.577409 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:39.577416 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:39.577519 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:39.607302 1078428 cri.go:89] found id: ""
	I1210 07:55:39.607329 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.607348 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:39.607355 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:39.607429 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:39.637231 1078428 cri.go:89] found id: ""
	I1210 07:55:39.637270 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.637282 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:39.637292 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:39.637305 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:39.694701 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:39.694745 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:39.711729 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:39.711761 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:39.777959 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:39.769450    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.770116    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.771675    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.771995    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.773551    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:39.769450    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.770116    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.771675    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.771995    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.773551    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:39.777980 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:39.777995 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:39.802829 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:39.802869 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:42.336278 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:42.348869 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:42.348958 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:42.376684 1078428 cri.go:89] found id: ""
	I1210 07:55:42.376751 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.376766 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:42.376774 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:42.376834 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:42.401855 1078428 cri.go:89] found id: ""
	I1210 07:55:42.401881 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.401890 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:42.401897 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:42.401956 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:42.429508 1078428 cri.go:89] found id: ""
	I1210 07:55:42.429532 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.429541 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:42.429547 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:42.429605 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:42.453954 1078428 cri.go:89] found id: ""
	I1210 07:55:42.453978 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.453988 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:42.453994 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:42.454052 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:42.480307 1078428 cri.go:89] found id: ""
	I1210 07:55:42.480372 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.480386 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:42.480393 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:42.480465 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:42.505157 1078428 cri.go:89] found id: ""
	I1210 07:55:42.505189 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.505198 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:42.505205 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:42.505272 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:42.530482 1078428 cri.go:89] found id: ""
	I1210 07:55:42.530505 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.530513 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:42.530520 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:42.530580 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:42.563929 1078428 cri.go:89] found id: ""
	I1210 07:55:42.563996 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.564019 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:42.564041 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:42.564081 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:42.627607 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:42.627645 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:42.644032 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:42.644059 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:42.709684 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:42.701113    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.701752    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.703455    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.704045    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.705675    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:42.701113    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.701752    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.703455    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.704045    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.705675    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:42.709704 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:42.709717 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:42.735150 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:42.735190 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:55:42.554153 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:44.554944 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:45.263314 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:45.276890 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:45.276965 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:45.320051 1078428 cri.go:89] found id: ""
	I1210 07:55:45.320079 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.320089 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:45.320096 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:45.320155 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:45.357108 1078428 cri.go:89] found id: ""
	I1210 07:55:45.357143 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.357153 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:45.357159 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:45.357235 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:45.386251 1078428 cri.go:89] found id: ""
	I1210 07:55:45.386281 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.386290 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:45.386296 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:45.386355 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:45.411934 1078428 cri.go:89] found id: ""
	I1210 07:55:45.411960 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.411969 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:45.411975 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:45.412034 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:45.438194 1078428 cri.go:89] found id: ""
	I1210 07:55:45.438221 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.438236 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:45.438242 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:45.438299 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:45.462840 1078428 cri.go:89] found id: ""
	I1210 07:55:45.462864 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.462874 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:45.462880 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:45.462938 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:45.487271 1078428 cri.go:89] found id: ""
	I1210 07:55:45.487296 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.487304 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:45.487311 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:45.487368 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:45.512829 1078428 cri.go:89] found id: ""
	I1210 07:55:45.512859 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.512868 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:45.512877 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:45.512888 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:45.592088 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:45.582808    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.583533    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.585213    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.585778    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.587458    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:45.582808    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.583533    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.585213    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.585778    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.587458    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:45.592106 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:45.592119 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:45.625233 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:45.625268 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:45.653443 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:45.653475 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:45.708240 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:45.708280 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:48.225757 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:48.236296 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:48.236369 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:48.261289 1078428 cri.go:89] found id: ""
	I1210 07:55:48.261312 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.261320 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:48.261337 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:48.261400 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:48.286722 1078428 cri.go:89] found id: ""
	I1210 07:55:48.286746 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.286755 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:48.286761 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:48.286819 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:48.322426 1078428 cri.go:89] found id: ""
	I1210 07:55:48.322453 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.322484 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:48.322507 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:48.322588 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:48.351023 1078428 cri.go:89] found id: ""
	I1210 07:55:48.351052 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.351062 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:48.351068 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:48.351126 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:48.378519 1078428 cri.go:89] found id: ""
	I1210 07:55:48.378542 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.378550 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:48.378556 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:48.378616 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:48.403355 1078428 cri.go:89] found id: ""
	I1210 07:55:48.403382 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.403392 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:48.403398 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:48.403478 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:48.427960 1078428 cri.go:89] found id: ""
	I1210 07:55:48.427986 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.427995 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:48.428001 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:48.428059 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:48.451603 1078428 cri.go:89] found id: ""
	I1210 07:55:48.451670 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.451696 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:48.451714 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:48.451727 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:48.506052 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:48.506088 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:48.523423 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:48.523453 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:48.594581 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:48.586197    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.587063    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.588701    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.589035    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.590570    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:48.594606 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:48.594619 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:48.622945 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:48.622982 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
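	Every cycle fails the same way: no control-plane container has ever been created, so kubectl's requests to https://localhost:8443 inside the node are refused because nothing is listening. A sketch of the checks (run inside the node, e.g. via minikube ssh) that separate "apiserver crashed" from "apiserver never started"; availability of ss in the node image is an assumption:

	  # sketch: distinguish a crashed apiserver from one that never started
	  sudo ss -ltn 'sport = :8443'                  # empty output: nothing listening on 8443
	  sudo crictl ps -a --name=kube-apiserver       # empty output: the container was never created
	  sudo journalctl -u kubelet -n 50 --no-pager   # kubelet should say why the static pod is absent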
	W1210 07:55:47.054022 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:49.054783 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:51.154448 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:51.165850 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:51.165926 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:51.191582 1078428 cri.go:89] found id: ""
	I1210 07:55:51.191607 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.191615 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:51.191622 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:51.191681 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:51.216289 1078428 cri.go:89] found id: ""
	I1210 07:55:51.216314 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.216324 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:51.216331 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:51.216390 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:51.245299 1078428 cri.go:89] found id: ""
	I1210 07:55:51.245324 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.245333 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:51.245339 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:51.245400 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:51.269348 1078428 cri.go:89] found id: ""
	I1210 07:55:51.269372 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.269380 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:51.269387 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:51.269443 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:51.296327 1078428 cri.go:89] found id: ""
	I1210 07:55:51.296350 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.296360 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:51.296367 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:51.296433 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:51.326976 1078428 cri.go:89] found id: ""
	I1210 07:55:51.326997 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.327005 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:51.327011 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:51.327069 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:51.360781 1078428 cri.go:89] found id: ""
	I1210 07:55:51.360857 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.360873 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:51.360881 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:51.360960 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:51.384754 1078428 cri.go:89] found id: ""
	I1210 07:55:51.384779 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.384788 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:51.384799 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:51.384810 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:51.443446 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:51.443483 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:51.461527 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:51.461559 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:51.529060 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:51.520063    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.520763    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.522338    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.522821    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.524380    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:51.529096 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:51.529109 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:51.561037 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:51.561354 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:54.111711 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:54.122707 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:54.122781 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:54.152821 1078428 cri.go:89] found id: ""
	I1210 07:55:54.152853 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.152867 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:54.152878 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:54.152961 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:54.180559 1078428 cri.go:89] found id: ""
	I1210 07:55:54.180583 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.180591 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:54.180598 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:54.180662 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:54.208251 1078428 cri.go:89] found id: ""
	I1210 07:55:54.208276 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.208285 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:54.208292 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:54.208349 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:54.233630 1078428 cri.go:89] found id: ""
	I1210 07:55:54.233655 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.233664 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:54.233670 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:54.233727 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:54.258409 1078428 cri.go:89] found id: ""
	I1210 07:55:54.258435 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.258443 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:54.258450 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:54.258533 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:54.282200 1078428 cri.go:89] found id: ""
	I1210 07:55:54.282234 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.282242 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:54.282248 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:54.282306 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:54.326329 1078428 cri.go:89] found id: ""
	I1210 07:55:54.326352 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.326361 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:54.326367 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:54.326428 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:54.353371 1078428 cri.go:89] found id: ""
	I1210 07:55:54.353396 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.353405 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:54.353415 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:54.353429 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:54.412987 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:54.413025 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:54.429633 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:54.429718 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:51.553930 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:53.554866 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:54.497491 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:54.488603    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.489246    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.490208    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.491739    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.492335    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:54.497530 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:54.497544 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:54.523210 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:54.523247 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:57.066626 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:57.077561 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:57.077642 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:57.102249 1078428 cri.go:89] found id: ""
	I1210 07:55:57.102273 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.102282 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:57.102289 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:57.102352 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:57.126387 1078428 cri.go:89] found id: ""
	I1210 07:55:57.126413 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.126421 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:57.126427 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:57.126506 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:57.151315 1078428 cri.go:89] found id: ""
	I1210 07:55:57.151341 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.151351 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:57.151357 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:57.151417 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:57.180045 1078428 cri.go:89] found id: ""
	I1210 07:55:57.180074 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.180083 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:57.180090 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:57.180150 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:57.205199 1078428 cri.go:89] found id: ""
	I1210 07:55:57.205225 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.205233 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:57.205240 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:57.205299 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:57.233971 1078428 cri.go:89] found id: ""
	I1210 07:55:57.233999 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.234009 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:57.234015 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:57.234078 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:57.258568 1078428 cri.go:89] found id: ""
	I1210 07:55:57.258594 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.258604 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:57.258610 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:57.258668 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:57.282764 1078428 cri.go:89] found id: ""
	I1210 07:55:57.282790 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.282800 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:57.282810 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:57.282823 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:57.299427 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:57.299453 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:57.374740 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:57.366367   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.367109   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.368634   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.369192   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.370847   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:57.374810 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:57.374851 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:57.400786 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:57.400822 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:57.427735 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:57.427767 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:55:56.054043 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:58.054190 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:00.055015 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
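	The interleaved W lines from pid 1077343 come from the parallel no-preload test: its profile no-preload-587009 is polling the node's Ready condition against 192.168.85.2:8443 and hitting the same connection-refused symptom. The equivalent manual poll, as a sketch (minikube names the kubeconfig context after the profile):

	  # sketch: check the Ready condition the way the test's retry loop does
	  kubectl --context no-preload-587009 get node no-preload-587009 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	  # or block until Ready (or timeout):
	  kubectl --context no-preload-587009 wait --for=condition=Ready node/no-preload-587009 --timeout=300s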
	I1210 07:55:59.984110 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:59.994599 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:59.994677 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:00.044693 1078428 cri.go:89] found id: ""
	I1210 07:56:00.044863 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.044893 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:00.044928 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:00.045024 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:00.118046 1078428 cri.go:89] found id: ""
	I1210 07:56:00.118124 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.118150 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:00.118171 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:00.119167 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:00.182111 1078428 cri.go:89] found id: ""
	I1210 07:56:00.182136 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.182145 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:00.182152 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:00.182960 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:00.239971 1078428 cri.go:89] found id: ""
	I1210 07:56:00.239996 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.240006 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:00.240013 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:00.240085 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:00.287888 1078428 cri.go:89] found id: ""
	I1210 07:56:00.287927 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.287937 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:00.287945 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:00.288014 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:00.352509 1078428 cri.go:89] found id: ""
	I1210 07:56:00.352556 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.352566 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:00.352593 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:00.352712 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:00.421383 1078428 cri.go:89] found id: ""
	I1210 07:56:00.421421 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.421430 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:00.421437 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:00.421521 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:00.456737 1078428 cri.go:89] found id: ""
	I1210 07:56:00.456766 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.456776 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:00.456786 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:00.456803 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:00.539348 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:00.530832   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.531677   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.533452   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.533939   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.535406   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:00.539370 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:00.539385 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:00.569574 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:00.569616 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:00.613655 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:00.613680 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:00.671124 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:00.671163 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
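	The cri.go lines list containers under containerd's runc root for the k8s.io namespace (/run/containerd/runc/k8s.io). The same view is available from containerd directly; a sketch, assuming the stock k8s.io namespace used by the CRI plugin:

	  # sketch: inspect the k8s.io containerd namespace directly (run inside the node)
	  sudo ctr -n k8s.io containers ls
	  sudo ctr -n k8s.io tasks ls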
	I1210 07:56:03.187739 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:03.198133 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:03.198208 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:03.223791 1078428 cri.go:89] found id: ""
	I1210 07:56:03.223818 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.223828 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:03.223834 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:03.223894 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:03.248620 1078428 cri.go:89] found id: ""
	I1210 07:56:03.248644 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.248653 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:03.248659 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:03.248720 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:03.273951 1078428 cri.go:89] found id: ""
	I1210 07:56:03.273975 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.273985 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:03.273991 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:03.274053 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:03.300277 1078428 cri.go:89] found id: ""
	I1210 07:56:03.300300 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.300309 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:03.300315 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:03.300372 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:03.332941 1078428 cri.go:89] found id: ""
	I1210 07:56:03.332967 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.332977 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:03.332983 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:03.333038 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:03.367066 1078428 cri.go:89] found id: ""
	I1210 07:56:03.367091 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.367100 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:03.367106 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:03.367164 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:03.391075 1078428 cri.go:89] found id: ""
	I1210 07:56:03.391098 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.391106 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:03.391112 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:03.391170 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:03.415021 1078428 cri.go:89] found id: ""
	I1210 07:56:03.415049 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.415058 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:03.415068 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:03.415079 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:03.440424 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:03.440470 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:03.468290 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:03.468319 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:03.525567 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:03.525601 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:03.541470 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:03.541505 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:03.626098 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:03.618196   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.618603   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.620212   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.620606   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.622172   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
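	Because the loop only repeats with fresh timestamps, the actionable evidence once the wait gives up is the bundled status and log dump for the profile; a post-mortem sketch (PROFILE as assumed above):

	  # sketch: capture state after the wait loop times out
	  minikube status -p "$PROFILE"
	  minikube logs -p "$PROFILE" --file=minikube-logs.txt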
	W1210 07:56:02.554809 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:05.054059 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:06.126647 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:06.137759 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:06.137831 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:06.163154 1078428 cri.go:89] found id: ""
	I1210 07:56:06.163181 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.163191 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:06.163198 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:06.163265 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:06.192495 1078428 cri.go:89] found id: ""
	I1210 07:56:06.192521 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.192530 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:06.192536 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:06.192615 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:06.220976 1078428 cri.go:89] found id: ""
	I1210 07:56:06.221009 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.221017 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:06.221025 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:06.221134 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:06.246400 1078428 cri.go:89] found id: ""
	I1210 07:56:06.246427 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.246436 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:06.246442 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:06.246523 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:06.272644 1078428 cri.go:89] found id: ""
	I1210 07:56:06.272667 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.272675 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:06.272681 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:06.272738 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:06.300567 1078428 cri.go:89] found id: ""
	I1210 07:56:06.300636 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.300648 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:06.300655 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:06.300726 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:06.332683 1078428 cri.go:89] found id: ""
	I1210 07:56:06.332750 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.332773 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:06.332795 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:06.332881 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:06.366018 1078428 cri.go:89] found id: ""
	I1210 07:56:06.366099 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.366124 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:06.366149 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:06.366177 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:06.422922 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:06.422958 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:06.439199 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:06.439231 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:06.512644 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:06.503564   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.504265   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.506193   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.506871   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.508506   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:06.512669 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:06.512682 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:06.537590 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:06.537625 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:09.085608 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:09.095930 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:09.096006 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:09.119422 1078428 cri.go:89] found id: ""
	I1210 07:56:09.119445 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.119454 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:09.119460 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:09.119518 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:09.145193 1078428 cri.go:89] found id: ""
	I1210 07:56:09.145220 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.145230 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:09.145236 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:09.145296 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:09.170538 1078428 cri.go:89] found id: ""
	I1210 07:56:09.170567 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.170576 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:09.170582 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:09.170640 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:09.199713 1078428 cri.go:89] found id: ""
	I1210 07:56:09.199741 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.199749 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:09.199756 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:09.199815 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:09.224005 1078428 cri.go:89] found id: ""
	I1210 07:56:09.224037 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.224046 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:09.224053 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:09.224112 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:09.254251 1078428 cri.go:89] found id: ""
	I1210 07:56:09.254273 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.254283 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:09.254290 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:09.254348 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:09.280458 1078428 cri.go:89] found id: ""
	I1210 07:56:09.280484 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.280493 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:09.280500 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:09.280565 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:09.320912 1078428 cri.go:89] found id: ""
	I1210 07:56:09.320943 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.320952 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:09.320961 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:09.320974 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:09.386817 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:09.386854 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:09.402878 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:09.402954 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:09.472013 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:09.462698   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.463971   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.464603   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.466051   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.467835   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
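Every "describe nodes" probe in this run fails the same way: kubectl cannot reach the apiserver on localhost:8443, client-go retries discovery five times, and the command exits 1. A minimal Go sketch of such a probe, reconstructed from the command line shown above (illustrative only, not minikube's logs.go):

// Illustrative: re-run the describe-nodes probe from the log and treat a
// non-zero exit as "control plane not reachable yet".
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl" // path from the log
	cmd := exec.Command("sudo", kubectl, "describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// Matches the failures above: connection refused on localhost:8443.
		fmt.Printf("describe nodes failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}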
	I1210 07:56:09.472092 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:09.472114 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1210 07:56:07.054571 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:09.054701 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:09.497983 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:09.498020 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
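The lines above are one full iteration of the log-gathering loop: pgrep for a kube-apiserver process, ask crictl for each expected component by name, find nothing, then collect kubelet, dmesg, describe-nodes, containerd, and container-status output. A hypothetical reconstruction of the container probe, using the exact crictl flags from the log (a sketch, not minikube's cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Same command the runner executes on the node:
		//   sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: found %d container(s): %v\n", name, len(ids), ids)
	}
}

An empty ID list for every component, as seen in each iteration here, suggests the control plane was never started rather than started and crashed.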
	I1210 07:56:12.030207 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:12.040966 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:12.041087 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:12.069314 1078428 cri.go:89] found id: ""
	I1210 07:56:12.069346 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.069356 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:12.069362 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:12.069424 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:12.096321 1078428 cri.go:89] found id: ""
	I1210 07:56:12.096400 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.096423 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:12.096438 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:12.096519 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:12.122859 1078428 cri.go:89] found id: ""
	I1210 07:56:12.122887 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.122896 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:12.122903 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:12.122985 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:12.148481 1078428 cri.go:89] found id: ""
	I1210 07:56:12.148505 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.148514 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:12.148520 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:12.148633 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:12.172954 1078428 cri.go:89] found id: ""
	I1210 07:56:12.172978 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.172995 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:12.173003 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:12.173063 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:12.198414 1078428 cri.go:89] found id: ""
	I1210 07:56:12.198436 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.198446 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:12.198453 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:12.198530 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:12.227549 1078428 cri.go:89] found id: ""
	I1210 07:56:12.227576 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.227586 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:12.227592 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:12.227651 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:12.255277 1078428 cri.go:89] found id: ""
	I1210 07:56:12.255300 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.255309 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:12.255318 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:12.255330 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:12.343072 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:12.327709   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.328182   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.329582   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.330282   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.331929   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:12.343095 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:12.343109 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:12.370845 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:12.370884 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:12.401190 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:12.401217 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:12.456146 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:12.456181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:56:11.554344 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:13.554843 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
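Interleaved with that loop, a second process (pid 1077343, the no-preload test) polls the node's Ready condition every two seconds and logs each connection-refused attempt. A hedged sketch of that retry shape, using the URL from the warnings above (the real test goes through a Kubernetes client, not raw HTTP):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009" // from the log
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The cluster serves a cert from minikube's own CA; a real client
		// would trust that CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for attempt := 1; attempt <= 60; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("attempt %d: %v (will retry)\n", attempt, err)
			time.Sleep(2 * time.Second) // matches the ~2s cadence above
			continue
		}
		resp.Body.Close()
		fmt.Printf("apiserver answered: %s\n", resp.Status)
		return
	}
	fmt.Println("gave up waiting for the apiserver")
}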
	I1210 07:56:14.972152 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:14.983046 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:14.983121 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:15.031099 1078428 cri.go:89] found id: ""
	I1210 07:56:15.031183 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.031217 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:15.031260 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:15.031373 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:15.061619 1078428 cri.go:89] found id: ""
	I1210 07:56:15.061646 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.061655 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:15.061662 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:15.061728 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:15.088678 1078428 cri.go:89] found id: ""
	I1210 07:56:15.088701 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.088709 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:15.088716 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:15.088781 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:15.118776 1078428 cri.go:89] found id: ""
	I1210 07:56:15.118854 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.118872 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:15.118881 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:15.118945 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:15.144691 1078428 cri.go:89] found id: ""
	I1210 07:56:15.144717 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.144727 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:15.144734 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:15.144799 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:15.169827 1078428 cri.go:89] found id: ""
	I1210 07:56:15.169854 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.169863 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:15.169870 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:15.169927 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:15.196425 1078428 cri.go:89] found id: ""
	I1210 07:56:15.196459 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.196468 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:15.196474 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:15.196533 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:15.221736 1078428 cri.go:89] found id: ""
	I1210 07:56:15.221763 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.221772 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:15.221782 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:15.221794 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:15.237860 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:15.237890 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:15.309823 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:15.299280   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.302014   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.302821   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.303726   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.304513   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:15.309847 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:15.309860 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:15.342939 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:15.342990 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:15.376812 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:15.376839 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:17.934235 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:17.945317 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:17.945396 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:17.971659 1078428 cri.go:89] found id: ""
	I1210 07:56:17.971685 1078428 logs.go:282] 0 containers: []
	W1210 07:56:17.971694 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:17.971700 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:17.971753 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:17.996434 1078428 cri.go:89] found id: ""
	I1210 07:56:17.996476 1078428 logs.go:282] 0 containers: []
	W1210 07:56:17.996488 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:17.996495 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:17.996560 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:18.024303 1078428 cri.go:89] found id: ""
	I1210 07:56:18.024338 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.024347 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:18.024354 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:18.024416 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:18.049317 1078428 cri.go:89] found id: ""
	I1210 07:56:18.049344 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.049353 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:18.049360 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:18.049421 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:18.079586 1078428 cri.go:89] found id: ""
	I1210 07:56:18.079611 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.079620 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:18.079627 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:18.079686 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:18.108486 1078428 cri.go:89] found id: ""
	I1210 07:56:18.108511 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.108519 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:18.108526 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:18.108601 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:18.137645 1078428 cri.go:89] found id: ""
	I1210 07:56:18.137671 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.137680 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:18.137686 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:18.137767 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:18.161838 1078428 cri.go:89] found id: ""
	I1210 07:56:18.161863 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.161874 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:18.161883 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:18.161916 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:18.235505 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:18.227479   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.228109   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.229636   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.230246   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.231753   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:18.235526 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:18.235539 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:18.260551 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:18.260589 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:18.288267 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:18.288296 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:18.349132 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:18.349215 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:56:16.054030 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:18.054084 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:20.868569 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:20.879574 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:20.879649 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:20.904201 1078428 cri.go:89] found id: ""
	I1210 07:56:20.904226 1078428 logs.go:282] 0 containers: []
	W1210 07:56:20.904235 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:20.904241 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:20.904299 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:20.929396 1078428 cri.go:89] found id: ""
	I1210 07:56:20.929423 1078428 logs.go:282] 0 containers: []
	W1210 07:56:20.929432 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:20.929439 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:20.929514 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:20.954953 1078428 cri.go:89] found id: ""
	I1210 07:56:20.954984 1078428 logs.go:282] 0 containers: []
	W1210 07:56:20.954993 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:20.954999 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:20.955058 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:20.978741 1078428 cri.go:89] found id: ""
	I1210 07:56:20.978767 1078428 logs.go:282] 0 containers: []
	W1210 07:56:20.978776 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:20.978782 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:20.978841 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:21.003286 1078428 cri.go:89] found id: ""
	I1210 07:56:21.003313 1078428 logs.go:282] 0 containers: []
	W1210 07:56:21.003323 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:21.003330 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:21.003402 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:21.034505 1078428 cri.go:89] found id: ""
	I1210 07:56:21.034527 1078428 logs.go:282] 0 containers: []
	W1210 07:56:21.034536 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:21.034543 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:21.034605 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:21.058861 1078428 cri.go:89] found id: ""
	I1210 07:56:21.058885 1078428 logs.go:282] 0 containers: []
	W1210 07:56:21.058894 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:21.058900 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:21.058958 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:21.082740 1078428 cri.go:89] found id: ""
	I1210 07:56:21.082764 1078428 logs.go:282] 0 containers: []
	W1210 07:56:21.082773 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:21.082782 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:21.082794 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:21.098247 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:21.098276 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:21.161962 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:21.153624   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.154239   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.155892   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.156389   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.158100   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:21.161982 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:21.161995 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:21.187272 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:21.187314 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:21.214180 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:21.214213 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:23.769450 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:23.780372 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:23.780505 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:23.817607 1078428 cri.go:89] found id: ""
	I1210 07:56:23.817631 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.817641 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:23.817648 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:23.817709 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:23.848903 1078428 cri.go:89] found id: ""
	I1210 07:56:23.848927 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.848949 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:23.848960 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:23.849023 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:23.877281 1078428 cri.go:89] found id: ""
	I1210 07:56:23.877305 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.877314 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:23.877320 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:23.877387 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:23.903972 1078428 cri.go:89] found id: ""
	I1210 07:56:23.903997 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.904006 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:23.904013 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:23.904089 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:23.929481 1078428 cri.go:89] found id: ""
	I1210 07:56:23.929508 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.929517 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:23.929525 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:23.929586 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:23.954626 1078428 cri.go:89] found id: ""
	I1210 07:56:23.954665 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.954676 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:23.954683 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:23.954785 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:23.980069 1078428 cri.go:89] found id: ""
	I1210 07:56:23.980102 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.980111 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:23.980117 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:23.980176 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:24.005963 1078428 cri.go:89] found id: ""
	I1210 07:56:24.005987 1078428 logs.go:282] 0 containers: []
	W1210 07:56:24.005996 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:24.006006 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:24.006017 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:24.036028 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:24.036065 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:24.065541 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:24.065571 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:24.126584 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:24.126630 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:24.143358 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:24.143391 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:24.208974 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:24.200724   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.201305   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.202949   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.203979   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.204653   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 07:56:20.554242 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:22.554679 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:25.054999 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:26.710619 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:26.721267 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:26.721343 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:26.746073 1078428 cri.go:89] found id: ""
	I1210 07:56:26.746100 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.746109 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:26.746115 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:26.746178 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:26.772432 1078428 cri.go:89] found id: ""
	I1210 07:56:26.772456 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.772472 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:26.772479 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:26.772538 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:26.809928 1078428 cri.go:89] found id: ""
	I1210 07:56:26.809954 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.809964 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:26.809970 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:26.810026 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:26.837500 1078428 cri.go:89] found id: ""
	I1210 07:56:26.837522 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.837531 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:26.837538 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:26.837592 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:26.864667 1078428 cri.go:89] found id: ""
	I1210 07:56:26.864693 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.864702 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:26.864708 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:26.864768 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:26.892330 1078428 cri.go:89] found id: ""
	I1210 07:56:26.892359 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.892368 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:26.892374 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:26.892457 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:26.916781 1078428 cri.go:89] found id: ""
	I1210 07:56:26.916807 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.916815 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:26.916822 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:26.916902 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:26.945103 1078428 cri.go:89] found id: ""
	I1210 07:56:26.945128 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.945137 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:26.945147 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:26.945178 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:27.001893 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:27.001933 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:27.020119 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:27.020149 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:27.092626 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:27.084844   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.085466   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.086971   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.087472   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.088931   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:27.092690 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:27.092712 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:27.118838 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:27.118873 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:56:27.554852 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:29.554968 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:29.646997 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:29.659058 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:29.659139 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:29.684417 1078428 cri.go:89] found id: ""
	I1210 07:56:29.684442 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.684452 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:29.684459 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:29.684532 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:29.713716 1078428 cri.go:89] found id: ""
	I1210 07:56:29.713747 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.713756 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:29.713762 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:29.713829 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:29.742671 1078428 cri.go:89] found id: ""
	I1210 07:56:29.742747 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.742761 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:29.742769 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:29.742834 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:29.767461 1078428 cri.go:89] found id: ""
	I1210 07:56:29.767488 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.767497 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:29.767503 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:29.767590 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:29.791629 1078428 cri.go:89] found id: ""
	I1210 07:56:29.791655 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.791664 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:29.791670 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:29.791728 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:29.822213 1078428 cri.go:89] found id: ""
	I1210 07:56:29.822240 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.822249 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:29.822255 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:29.822317 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:29.854606 1078428 cri.go:89] found id: ""
	I1210 07:56:29.854633 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.854643 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:29.854649 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:29.854709 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:29.880033 1078428 cri.go:89] found id: ""
	I1210 07:56:29.880059 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.880068 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:29.880077 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:29.880090 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:29.948475 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:29.940551   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.941416   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.942953   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.943415   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.944654   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:29.948498 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:29.948512 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:29.974136 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:29.974171 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:30.013967 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:30.014008 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:30.097748 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:30.097788 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
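Each retry cycle above is the same probe: minikube lists CRI containers for every control-plane name it expects, finds none, and falls back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status logs. A minimal shell sketch of that probe, assuming crictl is available inside the node (the harness issues each command separately over SSH, as logged; the loop form is illustrative only):

    # Probe for the control-plane containers minikube checks in each cycle.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching \"$name\""
    done

An empty result for every name corresponds to the repeated found id: "" / 0 containers lines throughout this section.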
	I1210 07:56:32.617610 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:32.628661 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:32.628735 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:32.652564 1078428 cri.go:89] found id: ""
	I1210 07:56:32.652594 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.652603 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:32.652610 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:32.652668 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:32.680277 1078428 cri.go:89] found id: ""
	I1210 07:56:32.680302 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.680310 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:32.680317 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:32.680379 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:32.704183 1078428 cri.go:89] found id: ""
	I1210 07:56:32.704207 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.704216 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:32.704222 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:32.704285 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:32.729141 1078428 cri.go:89] found id: ""
	I1210 07:56:32.729165 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.729174 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:32.729180 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:32.729237 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:32.753460 1078428 cri.go:89] found id: ""
	I1210 07:56:32.753482 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.753490 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:32.753496 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:32.753562 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:32.781036 1078428 cri.go:89] found id: ""
	I1210 07:56:32.781061 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.781069 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:32.781076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:32.781131 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:32.816565 1078428 cri.go:89] found id: ""
	I1210 07:56:32.816586 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.816594 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:32.816599 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:32.816655 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:32.848807 1078428 cri.go:89] found id: ""
	I1210 07:56:32.848832 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.848841 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:32.848849 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:32.848861 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:32.908343 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:32.908379 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:32.924367 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:32.924396 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:32.994542 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:32.985949   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.986567   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.988414   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.988968   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.990531   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:32.994565 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:32.994581 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:33.024802 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:33.024842 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:56:32.054022 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:34.554950 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
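The interleaved W lines from pid 1077343 belong to the parallel no-preload test, which polls the Ready condition of node no-preload-587009 at 192.168.85.2:8443 and is refused for the same reason: no apiserver is up. A hedged manual equivalent of that poll (the curl flags are assumptions for a quick reachability check, not what the test binary runs):

    # Hit the same endpoint the test polls; -k skips TLS verification.
    curl -sk https://192.168.85.2:8443/api/v1/nodes/no-preload-587009 \
      || echo "apiserver for no-preload-587009 unreachable"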
	I1210 07:56:35.557491 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:35.568723 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:35.568795 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:35.601157 1078428 cri.go:89] found id: ""
	I1210 07:56:35.601184 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.601193 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:35.601200 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:35.601260 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:35.628459 1078428 cri.go:89] found id: ""
	I1210 07:56:35.628494 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.628503 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:35.628509 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:35.628570 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:35.656310 1078428 cri.go:89] found id: ""
	I1210 07:56:35.656332 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.656342 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:35.656348 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:35.656404 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:35.680954 1078428 cri.go:89] found id: ""
	I1210 07:56:35.680980 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.680992 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:35.680998 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:35.681055 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:35.708548 1078428 cri.go:89] found id: ""
	I1210 07:56:35.708575 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.708584 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:35.708590 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:35.708648 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:35.736013 1078428 cri.go:89] found id: ""
	I1210 07:56:35.736040 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.736049 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:35.736056 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:35.736124 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:35.760465 1078428 cri.go:89] found id: ""
	I1210 07:56:35.760495 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.760504 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:35.760511 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:35.760574 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:35.785429 1078428 cri.go:89] found id: ""
	I1210 07:56:35.785451 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.785460 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:35.785469 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:35.785481 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:35.871280 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:35.862207   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.863090   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.864745   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.865288   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.866984   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:35.871302 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:35.871315 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:35.897087 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:35.897124 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:35.925107 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:35.925134 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:35.981188 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:35.981270 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
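Each cycle opens with a direct process check before the crictl probes: pgrep looks for the newest running kube-apiserver whose full command line mentions minikube. The same check, as it appears at the top of every cycle in this log:

    # Newest kube-apiserver pid matching the full command line, or nothing.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      || echo "no kube-apiserver process running"

No pid is ever printed here, which is consistent with crictl finding no kube-apiserver container.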
	I1210 07:56:38.499048 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:38.509835 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:38.509908 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:38.534615 1078428 cri.go:89] found id: ""
	I1210 07:56:38.534637 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.534645 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:38.534652 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:38.534708 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:38.576309 1078428 cri.go:89] found id: ""
	I1210 07:56:38.576332 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.576341 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:38.576347 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:38.576407 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:38.611259 1078428 cri.go:89] found id: ""
	I1210 07:56:38.611281 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.611290 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:38.611297 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:38.611357 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:38.637583 1078428 cri.go:89] found id: ""
	I1210 07:56:38.637612 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.637621 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:38.637627 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:38.637686 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:38.662187 1078428 cri.go:89] found id: ""
	I1210 07:56:38.662267 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.662290 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:38.662310 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:38.662402 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:38.686838 1078428 cri.go:89] found id: ""
	I1210 07:56:38.686861 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.686869 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:38.686876 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:38.686933 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:38.710788 1078428 cri.go:89] found id: ""
	I1210 07:56:38.710815 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.710824 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:38.710831 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:38.710930 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:38.736531 1078428 cri.go:89] found id: ""
	I1210 07:56:38.736556 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.736565 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:38.736575 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:38.736589 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:38.752335 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:38.752364 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:38.826607 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:38.813602   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.818748   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.819180   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.820763   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.821332   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:38.826675 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:38.826688 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:38.854204 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:38.854240 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:38.883619 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:38.883647 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:56:37.054712 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:39.554110 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:41.439316 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:41.450451 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:41.450532 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:41.476998 1078428 cri.go:89] found id: ""
	I1210 07:56:41.477022 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.477030 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:41.477036 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:41.477096 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:41.502043 1078428 cri.go:89] found id: ""
	I1210 07:56:41.502069 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.502078 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:41.502084 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:41.502145 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:41.526905 1078428 cri.go:89] found id: ""
	I1210 07:56:41.526931 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.526940 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:41.526947 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:41.527007 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:41.558750 1078428 cri.go:89] found id: ""
	I1210 07:56:41.558779 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.558788 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:41.558795 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:41.558851 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:41.596637 1078428 cri.go:89] found id: ""
	I1210 07:56:41.596664 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.596674 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:41.596680 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:41.596742 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:41.622316 1078428 cri.go:89] found id: ""
	I1210 07:56:41.622340 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.622348 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:41.622355 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:41.622418 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:41.648410 1078428 cri.go:89] found id: ""
	I1210 07:56:41.648482 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.648511 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:41.648518 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:41.648581 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:41.680776 1078428 cri.go:89] found id: ""
	I1210 07:56:41.680802 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.680811 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:41.680820 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:41.680832 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:41.708185 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:41.708211 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:41.767625 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:41.767662 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:41.784949 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:41.784980 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:41.871610 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:41.863026   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.863723   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.865304   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.865872   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.867591   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:41.871632 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:41.871645 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
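Every describe-nodes attempt fails identically: with no kube-apiserver container or process, nothing listens on localhost:8443 inside the node, so kubectl's connection is refused before any API request is made. A quick confirmation on the node (running ss here is an assumption; it is not part of the harness):

    # Show whether anything is listening on the apiserver port.
    sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"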
	I1210 07:56:44.398611 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:44.408733 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:44.408806 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:44.432507 1078428 cri.go:89] found id: ""
	I1210 07:56:44.432531 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.432540 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:44.432546 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:44.432607 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:44.457597 1078428 cri.go:89] found id: ""
	I1210 07:56:44.457622 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.457631 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:44.457637 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:44.457697 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:44.485123 1078428 cri.go:89] found id: ""
	I1210 07:56:44.485149 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.485158 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:44.485165 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:44.485228 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	W1210 07:56:42.054022 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:44.054891 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:44.510813 1078428 cri.go:89] found id: ""
	I1210 07:56:44.510848 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.510857 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:44.510870 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:44.510929 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:44.534504 1078428 cri.go:89] found id: ""
	I1210 07:56:44.534528 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.534537 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:44.534543 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:44.534600 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:44.574866 1078428 cri.go:89] found id: ""
	I1210 07:56:44.574940 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.574962 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:44.574983 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:44.575074 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:44.605450 1078428 cri.go:89] found id: ""
	I1210 07:56:44.605523 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.605546 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:44.605566 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:44.605652 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:44.633965 1078428 cri.go:89] found id: ""
	I1210 07:56:44.634039 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.634064 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:44.634087 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:44.634124 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:44.692591 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:44.692628 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:44.708687 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:44.708718 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:44.774532 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:44.765883   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.766513   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.768058   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.769010   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.770731   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:44.774581 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:44.774594 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:44.801145 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:44.801235 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:47.336116 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:47.346722 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:47.346793 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:47.370822 1078428 cri.go:89] found id: ""
	I1210 07:56:47.370860 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.370870 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:47.370876 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:47.370948 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:47.401111 1078428 cri.go:89] found id: ""
	I1210 07:56:47.401140 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.401149 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:47.401155 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:47.401212 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:47.430968 1078428 cri.go:89] found id: ""
	I1210 07:56:47.430991 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.430999 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:47.431004 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:47.431063 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:47.455626 1078428 cri.go:89] found id: ""
	I1210 07:56:47.455650 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.455659 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:47.455665 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:47.455722 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:47.479857 1078428 cri.go:89] found id: ""
	I1210 07:56:47.479882 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.479890 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:47.479896 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:47.479959 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:47.504271 1078428 cri.go:89] found id: ""
	I1210 07:56:47.504294 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.504305 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:47.504312 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:47.504373 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:47.532761 1078428 cri.go:89] found id: ""
	I1210 07:56:47.532837 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.532863 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:47.532886 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:47.532990 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:47.570086 1078428 cri.go:89] found id: ""
	I1210 07:56:47.570108 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.570116 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:47.570125 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:47.570137 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:47.586049 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:47.586078 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:47.655434 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:47.647357   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.647927   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.649588   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.650042   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.651608   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:47.655455 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:47.655470 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:47.680757 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:47.680794 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:47.708957 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:47.708986 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:56:46.554013 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:49.054042 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:50.265598 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:50.276268 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:50.276342 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:50.301484 1078428 cri.go:89] found id: ""
	I1210 07:56:50.301507 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.301515 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:50.301521 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:50.301582 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:50.327230 1078428 cri.go:89] found id: ""
	I1210 07:56:50.327255 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.327264 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:50.327270 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:50.327331 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:50.352201 1078428 cri.go:89] found id: ""
	I1210 07:56:50.352224 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.352233 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:50.352239 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:50.352299 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:50.377546 1078428 cri.go:89] found id: ""
	I1210 07:56:50.377571 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.377580 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:50.377586 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:50.377647 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:50.403517 1078428 cri.go:89] found id: ""
	I1210 07:56:50.403544 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.403552 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:50.403559 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:50.403635 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:50.432794 1078428 cri.go:89] found id: ""
	I1210 07:56:50.432820 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.432829 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:50.432835 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:50.432924 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:50.456905 1078428 cri.go:89] found id: ""
	I1210 07:56:50.456931 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.456941 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:50.456947 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:50.457013 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:50.488840 1078428 cri.go:89] found id: ""
	I1210 07:56:50.488908 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.488932 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:50.488949 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:50.488962 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:50.547966 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:50.548000 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:50.565711 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:50.565789 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:50.652776 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:50.644502   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.645087   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.646685   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.647075   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.648875   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:50.652800 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:50.652815 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:50.678909 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:50.678950 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:53.207825 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:53.218403 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:53.218500 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:53.244529 1078428 cri.go:89] found id: ""
	I1210 07:56:53.244556 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.244565 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:53.244572 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:53.244629 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:53.270382 1078428 cri.go:89] found id: ""
	I1210 07:56:53.270408 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.270418 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:53.270424 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:53.270517 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:53.295316 1078428 cri.go:89] found id: ""
	I1210 07:56:53.295342 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.295352 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:53.295358 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:53.295425 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:53.324326 1078428 cri.go:89] found id: ""
	I1210 07:56:53.324351 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.324360 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:53.324367 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:53.324444 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:53.349399 1078428 cri.go:89] found id: ""
	I1210 07:56:53.349425 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.349435 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:53.349441 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:53.349555 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:53.374280 1078428 cri.go:89] found id: ""
	I1210 07:56:53.374305 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.374314 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:53.374321 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:53.374431 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:53.398894 1078428 cri.go:89] found id: ""
	I1210 07:56:53.398920 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.398929 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:53.398935 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:53.398992 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:53.423872 1078428 cri.go:89] found id: ""
	I1210 07:56:53.423897 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.423907 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
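The scan above issues one crictl query per control-plane component and finds nothing. A compact sketch that reproduces the same eight checks in a single pass; the component names and crictl flags are copied from the log, the loop itself is illustrative:

    # One loop over the same component names the log collector probes.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done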
	I1210 07:56:53.423920 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:53.423936 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:53.440226 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:53.440258 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:53.503949 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:53.495490   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.495917   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.497631   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.498044   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.499663   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:53.495490   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.495917   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.497631   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.498044   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.499663   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:53.503975 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:53.503989 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:53.530691 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:53.530737 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:53.577761 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:53.577835 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
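For reference, these are the four host-side collection commands the gatherer cycles through, combined into one sketch. Every command is taken verbatim from the Run: lines above; only the ordering into a single script is illustrative:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a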
	W1210 07:56:51.054085 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:53.054150 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:56.142597 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:56.153164 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:56.153234 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:56.177358 1078428 cri.go:89] found id: ""
	I1210 07:56:56.177391 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.177400 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:56.177406 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:56.177475 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:56.202573 1078428 cri.go:89] found id: ""
	I1210 07:56:56.202641 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.202657 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:56.202664 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:56.202725 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:56.226758 1078428 cri.go:89] found id: ""
	I1210 07:56:56.226785 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.226795 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:56.226802 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:56.226891 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:56.250286 1078428 cri.go:89] found id: ""
	I1210 07:56:56.250310 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.250319 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:56.250327 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:56.250381 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:56.276297 1078428 cri.go:89] found id: ""
	I1210 07:56:56.276375 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.276391 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:56.276398 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:56.276458 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:56.301334 1078428 cri.go:89] found id: ""
	I1210 07:56:56.301366 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.301375 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:56.301382 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:56.301450 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:56.325521 1078428 cri.go:89] found id: ""
	I1210 07:56:56.325557 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.325566 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:56.325572 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:56.325640 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:56.351180 1078428 cri.go:89] found id: ""
	I1210 07:56:56.351219 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.351228 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:56.351237 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:56.351249 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:56.406556 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:56.406592 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:56.422756 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:56.422788 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:56.486945 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:56.478739   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.479484   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.481033   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.481354   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.483059   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:56.478739   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.479484   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.481033   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.481354   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.483059   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:56.486967 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:56.486983 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:56.512575 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:56.512616 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:59.046618 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:59.059092 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:59.059161 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:59.089542 1078428 cri.go:89] found id: ""
	I1210 07:56:59.089571 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.089580 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:59.089586 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:59.089648 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:59.118669 1078428 cri.go:89] found id: ""
	I1210 07:56:59.118691 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.118700 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:59.118706 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:59.118770 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:59.143775 1078428 cri.go:89] found id: ""
	I1210 07:56:59.143802 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.143814 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:59.143821 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:59.143880 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:59.167972 1078428 cri.go:89] found id: ""
	I1210 07:56:59.167997 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.168006 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:59.168012 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:59.168088 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:59.195291 1078428 cri.go:89] found id: ""
	I1210 07:56:59.195316 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.195325 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:59.195331 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:59.195434 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:59.219900 1078428 cri.go:89] found id: ""
	I1210 07:56:59.219928 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.219937 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:59.219943 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:59.220002 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:59.252792 1078428 cri.go:89] found id: ""
	I1210 07:56:59.252818 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.252827 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:59.252834 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:59.252894 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:59.281785 1078428 cri.go:89] found id: ""
	I1210 07:56:59.281808 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.281823 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:59.281832 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:59.281843 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:59.337457 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:59.337496 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:59.353622 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:59.353650 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:59.423704 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:59.414855   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.415874   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.416954   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.418065   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.418769   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:59.414855   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.415874   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.416954   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.418065   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.418769   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:59.423725 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:59.423739 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:59.449814 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:59.449853 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:56:55.554362 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:57.554656 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:59.554765 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
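The interleaved W-level lines come from the parallel no-preload test (process 1077343), which is polling its own apiserver at 192.168.85.2:8443 and hitting the same connection-refused condition. A hedged sketch of the equivalent manual probe; the URL is taken from the log, and curl -k stands in for the test's certificate-authenticated client:

    # -k skips TLS verification; the real test authenticates with client certs.
    curl -k https://192.168.85.2:8443/api/v1/nodes/no-preload-587009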
	I1210 07:57:01.979246 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:01.990999 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:01.991072 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:02.022990 1078428 cri.go:89] found id: ""
	I1210 07:57:02.023028 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.023038 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:02.023046 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:02.023109 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:02.050830 1078428 cri.go:89] found id: ""
	I1210 07:57:02.050857 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.050867 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:02.050873 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:02.050930 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:02.080878 1078428 cri.go:89] found id: ""
	I1210 07:57:02.080901 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.080909 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:02.080915 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:02.080974 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:02.111744 1078428 cri.go:89] found id: ""
	I1210 07:57:02.111766 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.111774 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:02.111780 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:02.111838 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:02.139560 1078428 cri.go:89] found id: ""
	I1210 07:57:02.139587 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.139596 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:02.139602 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:02.139662 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:02.164249 1078428 cri.go:89] found id: ""
	I1210 07:57:02.164274 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.164282 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:02.164289 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:02.164347 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:02.191165 1078428 cri.go:89] found id: ""
	I1210 07:57:02.191187 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.191196 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:02.191202 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:02.191280 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:02.220305 1078428 cri.go:89] found id: ""
	I1210 07:57:02.220371 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.220395 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:02.220419 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:02.220447 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:02.275451 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:02.275490 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:02.291722 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:02.291797 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:02.357294 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:02.349371   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.349907   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.351488   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.351933   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.353434   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:02.349371   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.349907   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.351488   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.351933   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.353434   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:02.357319 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:02.357333 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:02.382557 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:02.382591 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:57:02.053955 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:57:04.553976 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:57:04.913285 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:04.924140 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:04.924214 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:04.949752 1078428 cri.go:89] found id: ""
	I1210 07:57:04.949787 1078428 logs.go:282] 0 containers: []
	W1210 07:57:04.949796 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:04.949803 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:04.949869 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:04.974850 1078428 cri.go:89] found id: ""
	I1210 07:57:04.974876 1078428 logs.go:282] 0 containers: []
	W1210 07:57:04.974886 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:04.974892 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:04.974949 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:04.999787 1078428 cri.go:89] found id: ""
	I1210 07:57:04.999853 1078428 logs.go:282] 0 containers: []
	W1210 07:57:04.999868 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:04.999875 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:04.999937 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:05.031544 1078428 cri.go:89] found id: ""
	I1210 07:57:05.031570 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.031580 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:05.031586 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:05.031644 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:05.068235 1078428 cri.go:89] found id: ""
	I1210 07:57:05.068262 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.068272 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:05.068278 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:05.068337 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:05.101435 1078428 cri.go:89] found id: ""
	I1210 07:57:05.101462 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.101472 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:05.101479 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:05.101545 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:05.129616 1078428 cri.go:89] found id: ""
	I1210 07:57:05.129640 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.129648 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:05.129654 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:05.129733 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:05.155520 1078428 cri.go:89] found id: ""
	I1210 07:57:05.155544 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.155553 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:05.155563 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:05.155575 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:05.212400 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:05.212436 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:05.228606 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:05.228643 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:05.292822 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:05.284723   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.285318   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.286836   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.287339   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.288802   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:05.284723   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.285318   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.286836   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.287339   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.288802   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:05.292845 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:05.292858 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:05.318694 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:05.318732 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:07.846610 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:07.857861 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:07.857939 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:07.885093 1078428 cri.go:89] found id: ""
	I1210 07:57:07.885115 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.885124 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:07.885130 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:07.885192 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:07.909018 1078428 cri.go:89] found id: ""
	I1210 07:57:07.909043 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.909052 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:07.909058 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:07.909116 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:07.935262 1078428 cri.go:89] found id: ""
	I1210 07:57:07.935288 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.935298 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:07.935303 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:07.935366 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:07.959939 1078428 cri.go:89] found id: ""
	I1210 07:57:07.959965 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.959974 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:07.959981 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:07.960039 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:07.991314 1078428 cri.go:89] found id: ""
	I1210 07:57:07.991341 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.991350 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:07.991356 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:07.991415 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:08.020601 1078428 cri.go:89] found id: ""
	I1210 07:57:08.020628 1078428 logs.go:282] 0 containers: []
	W1210 07:57:08.020638 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:08.020645 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:08.020709 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:08.049221 1078428 cri.go:89] found id: ""
	I1210 07:57:08.049250 1078428 logs.go:282] 0 containers: []
	W1210 07:57:08.049259 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:08.049265 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:08.049323 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:08.078839 1078428 cri.go:89] found id: ""
	I1210 07:57:08.078862 1078428 logs.go:282] 0 containers: []
	W1210 07:57:08.078870 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:08.078883 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:08.078896 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:08.098811 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:08.098888 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:08.168958 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:08.160514   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.160982   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.162642   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.163069   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.164788   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:08.160514   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.160982   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.162642   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.163069   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.164788   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:08.169024 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:08.169046 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:08.195261 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:08.195297 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:08.222093 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:08.222121 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:57:06.554902 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:57:09.054181 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:57:10.778721 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:10.791524 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:10.791597 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:10.819485 1078428 cri.go:89] found id: ""
	I1210 07:57:10.819507 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.819519 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:10.819525 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:10.819585 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:10.872623 1078428 cri.go:89] found id: ""
	I1210 07:57:10.872646 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.872654 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:10.872660 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:10.872724 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:10.898357 1078428 cri.go:89] found id: ""
	I1210 07:57:10.898378 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.898387 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:10.898393 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:10.898448 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:10.923976 1078428 cri.go:89] found id: ""
	I1210 07:57:10.924000 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.924009 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:10.924016 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:10.924095 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:10.952951 1078428 cri.go:89] found id: ""
	I1210 07:57:10.952986 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.952996 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:10.953002 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:10.953069 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:10.977761 1078428 cri.go:89] found id: ""
	I1210 07:57:10.977793 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.977802 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:10.977808 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:10.977878 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:11.009022 1078428 cri.go:89] found id: ""
	I1210 07:57:11.009052 1078428 logs.go:282] 0 containers: []
	W1210 07:57:11.009069 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:11.009076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:11.009147 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:11.034444 1078428 cri.go:89] found id: ""
	I1210 07:57:11.034493 1078428 logs.go:282] 0 containers: []
	W1210 07:57:11.034502 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:11.034512 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:11.034523 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:11.098059 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:11.098096 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:11.117339 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:11.117370 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:11.190897 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:11.182016   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.182889   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.184458   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.184955   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.186522   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:11.182016   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.182889   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.184458   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.184955   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.186522   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:11.190919 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:11.190932 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:11.215685 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:11.215722 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:13.744333 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:13.754962 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:13.755031 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:13.783588 1078428 cri.go:89] found id: ""
	I1210 07:57:13.783611 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.783619 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:13.783625 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:13.783683 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:13.819100 1078428 cri.go:89] found id: ""
	I1210 07:57:13.819122 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.819130 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:13.819136 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:13.819193 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:13.860234 1078428 cri.go:89] found id: ""
	I1210 07:57:13.860257 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.860266 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:13.860272 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:13.860332 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:13.886331 1078428 cri.go:89] found id: ""
	I1210 07:57:13.886406 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.886418 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:13.886424 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:13.886540 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:13.911054 1078428 cri.go:89] found id: ""
	I1210 07:57:13.911080 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.911089 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:13.911097 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:13.911172 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:13.934983 1078428 cri.go:89] found id: ""
	I1210 07:57:13.935051 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.935066 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:13.935073 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:13.935131 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:13.960415 1078428 cri.go:89] found id: ""
	I1210 07:57:13.960440 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.960449 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:13.960455 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:13.960538 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:13.985917 1078428 cri.go:89] found id: ""
	I1210 07:57:13.985964 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.985974 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:13.985983 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:13.985995 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:14.046091 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:14.046336 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:14.068485 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:14.068513 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:14.145212 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:14.136671   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.137530   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.139238   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.139533   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.141026   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:14.136671   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.137530   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.139238   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.139533   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.141026   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:14.145235 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:14.145248 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:14.170375 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:14.170409 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:57:11.553974 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:57:13.554028 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:57:15.554374 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:57:17.554945 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:57:19.054633 1077343 node_ready.go:38] duration metric: took 6m0.001135979s for node "no-preload-587009" to be "Ready" ...
	I1210 07:57:19.057729 1077343 out.go:203] 
	W1210 07:57:19.060573 1077343 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 07:57:19.060592 1077343 out.go:285] * 
	W1210 07:57:19.062943 1077343 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:57:19.065570 1077343 out.go:203] 
	I1210 07:57:16.699528 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:16.710231 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:16.710301 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:16.734299 1078428 cri.go:89] found id: ""
	I1210 07:57:16.734325 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.734333 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:16.734339 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:16.734402 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:16.759890 1078428 cri.go:89] found id: ""
	I1210 07:57:16.759916 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.759925 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:16.759934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:16.760017 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:16.788155 1078428 cri.go:89] found id: ""
	I1210 07:57:16.788181 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.788191 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:16.788197 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:16.788256 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:16.817801 1078428 cri.go:89] found id: ""
	I1210 07:57:16.817828 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.817837 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:16.817844 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:16.817904 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:16.845878 1078428 cri.go:89] found id: ""
	I1210 07:57:16.845905 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.845913 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:16.845919 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:16.845975 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:16.873613 1078428 cri.go:89] found id: ""
	I1210 07:57:16.873641 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.873651 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:16.873658 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:16.873719 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:16.898666 1078428 cri.go:89] found id: ""
	I1210 07:57:16.898689 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.898698 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:16.898704 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:16.898762 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:16.922533 1078428 cri.go:89] found id: ""
	I1210 07:57:16.922560 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.922569 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:16.922579 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:16.922591 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:16.948298 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:16.948341 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:16.976671 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:16.976699 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:17.033642 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:17.033681 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:17.052529 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:17.052568 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:17.131312 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:17.121533   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.122382   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.123692   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.124090   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.127051   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:17.121533   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.122382   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.123692   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.124090   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.127051   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.820886372Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.820897753Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.820941675Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.820957323Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.820967374Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.820979354Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.820991735Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.821002452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.821025221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.821069053Z" level=info msg="Connect containerd service"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.821339826Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.821931810Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.835633697Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.835889266Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.835806303Z" level=info msg="Start subscribing containerd event"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.838543186Z" level=info msg="Start recovering state"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.862645834Z" level=info msg="Start event monitor"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.862821648Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.862884336Z" level=info msg="Start streaming server"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.862946598Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.863002574Z" level=info msg="runtime interface starting up..."
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.863060848Z" level=info msg="starting plugins..."
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.863142670Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 07:51:16 no-preload-587009 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.866796941Z" level=info msg="containerd successfully booted in 0.072064s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:21.597045    3924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:21.597450    3924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:21.599104    3924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:21.599820    3924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:21.601427    3924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:57:21 up  6:39,  0 user,  load average: 0.48, 0.62, 1.22
	Linux no-preload-587009 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:57:18 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:57:19 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 10 07:57:19 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:19 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:19 no-preload-587009 kubelet[3801]: E1210 07:57:19.147676    3801 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:57:19 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:57:19 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:57:19 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 10 07:57:19 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:19 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:19 no-preload-587009 kubelet[3822]: E1210 07:57:19.899168    3822 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:57:19 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:57:19 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:57:20 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 10 07:57:20 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:20 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:20 no-preload-587009 kubelet[3828]: E1210 07:57:20.610839    3828 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:57:20 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:57:20 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:57:21 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 10 07:57:21 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:21 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:21 no-preload-587009 kubelet[3854]: E1210 07:57:21.354353    3854 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:57:21 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:57:21 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-587009 -n no-preload-587009
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-587009 -n no-preload-587009: exit status 2 (501.560344ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-587009" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (373.77s)
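The kubelet journal above points at the root cause: on this host the v1.35.0-beta.0 kubelet refuses to validate its configuration under cgroup v1 ("cgroup v1 support is unsupported and will be removed in a future release"), so it crash-loops (restart counter at 485), the apiserver container is never created, and every localhost:8443 request is refused. A minimal triage sketch, assuming the profile name from the log and that the node image ships GNU stat and ss (hypothetical follow-up commands, not part of the captured run):

    # Print the cgroup filesystem type inside the node: "cgroup2fs" means
    # cgroup v2 (unified), "tmpfs" means the legacy v1 hierarchy that this
    # kubelet rejects.
    minikube ssh -p no-preload-587009 -- stat -fc %T /sys/fs/cgroup

    # Confirm nothing is listening on the apiserver port, matching the
    # "connection refused" errors gathered above.
    minikube ssh -p no-preload-587009 -- sudo ss -tlnp | grep 8443

On systemd hosts, cgroup v2 can typically be enabled by booting with the systemd.unified_cgroup_hierarchy=1 kernel parameter, after which the kubelet's validation should pass.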

x
+
TestStartStop/group/newest-cni/serial/SecondStart (374.99s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-237317 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1210 07:51:43.005612  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/old-k8s-version-166796/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:52:35.782502  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:54:16.545830  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/default-k8s-diff-port-444518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:55:14.424592  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:55:24.251183  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:56:43.016712  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/old-k8s-version-166796/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-237317 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 105 (6m8.396337083s)
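This second start fails the same way as no-preload above (exit status 105 after roughly 6m8s). When reproducing locally, the quickest confirmation is the kubelet journal on the node, mirroring the harness's own "sudo journalctl -u kubelet -n 400" gathering in the capture above (a hypothetical manual step; the profile name is taken from the command line):

    # Show the most recent kubelet restart attempts and their exit reason.
    minikube ssh -p newest-cni-237317 -- sudo journalctl -u kubelet -n 50 --no-pager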

-- stdout --
	* [newest-cni-237317] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "newest-cni-237317" primary control-plane node in "newest-cni-237317" cluster
	* Pulling base image v0.0.48-1765319469-22089 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1210 07:51:14.495415 1078428 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:51:14.495519 1078428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:51:14.495524 1078428 out.go:374] Setting ErrFile to fd 2...
	I1210 07:51:14.495529 1078428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:51:14.495772 1078428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 07:51:14.496198 1078428 out.go:368] Setting JSON to false
	I1210 07:51:14.497022 1078428 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23599,"bootTime":1765329476,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 07:51:14.497081 1078428 start.go:143] virtualization:  
	I1210 07:51:14.500489 1078428 out.go:179] * [newest-cni-237317] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:51:14.503586 1078428 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:51:14.503671 1078428 notify.go:221] Checking for updates...
	I1210 07:51:14.509469 1078428 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:51:14.512370 1078428 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:14.515169 1078428 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 07:51:14.518012 1078428 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:51:14.520797 1078428 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:51:14.527169 1078428 config.go:182] Loaded profile config "newest-cni-237317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:14.527731 1078428 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:51:14.566042 1078428 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:51:14.566172 1078428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:51:14.628663 1078428 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-10 07:51:14.618086592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:51:14.628767 1078428 docker.go:319] overlay module found
	I1210 07:51:14.631981 1078428 out.go:179] * Using the docker driver based on existing profile
	I1210 07:51:14.634809 1078428 start.go:309] selected driver: docker
	I1210 07:51:14.634833 1078428 start.go:927] validating driver "docker" against &{Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:51:14.634946 1078428 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:51:14.635637 1078428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:51:14.728404 1078428 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-10 07:51:14.713293715 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:51:14.728788 1078428 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 07:51:14.728810 1078428 cni.go:84] Creating CNI manager for ""
	I1210 07:51:14.728854 1078428 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:51:14.728892 1078428 start.go:353] cluster config:
	{Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:51:14.732274 1078428 out.go:179] * Starting "newest-cni-237317" primary control-plane node in "newest-cni-237317" cluster
	I1210 07:51:14.735049 1078428 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:51:14.738088 1078428 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:51:14.740969 1078428 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:51:14.741011 1078428 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 07:51:14.741020 1078428 cache.go:65] Caching tarball of preloaded images
	I1210 07:51:14.741100 1078428 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 07:51:14.741110 1078428 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1210 07:51:14.741232 1078428 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json ...
	I1210 07:51:14.741437 1078428 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:51:14.763634 1078428 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:51:14.763653 1078428 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:51:14.763668 1078428 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:51:14.763698 1078428 start.go:360] acquireMachinesLock for newest-cni-237317: {Name:mk865fae7594bb364b28b787041666ed4ecb9dd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:51:14.763755 1078428 start.go:364] duration metric: took 40.304µs to acquireMachinesLock for "newest-cni-237317"
	I1210 07:51:14.763774 1078428 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:51:14.763779 1078428 fix.go:54] fixHost starting: 
	I1210 07:51:14.764055 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:14.807148 1078428 fix.go:112] recreateIfNeeded on newest-cni-237317: state=Stopped err=<nil>
	W1210 07:51:14.807188 1078428 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:51:14.810511 1078428 out.go:252] * Restarting existing docker container for "newest-cni-237317" ...
	I1210 07:51:14.810602 1078428 cli_runner.go:164] Run: docker start newest-cni-237317
	I1210 07:51:15.140257 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:15.163514 1078428 kic.go:430] container "newest-cni-237317" state is running.
	I1210 07:51:15.165120 1078428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:51:15.200178 1078428 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json ...
	I1210 07:51:15.200425 1078428 machine.go:94] provisionDockerMachine start ...
	I1210 07:51:15.200484 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:15.234652 1078428 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:15.234972 1078428 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33845 <nil> <nil>}
	I1210 07:51:15.234980 1078428 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:51:15.238112 1078428 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 07:51:18.394621 1078428 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-237317
	
	I1210 07:51:18.394726 1078428 ubuntu.go:182] provisioning hostname "newest-cni-237317"
	I1210 07:51:18.394818 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:18.424081 1078428 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:18.424400 1078428 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33845 <nil> <nil>}
	I1210 07:51:18.424411 1078428 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-237317 && echo "newest-cni-237317" | sudo tee /etc/hostname
	I1210 07:51:18.589360 1078428 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-237317
	
	I1210 07:51:18.589454 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:18.613196 1078428 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:18.613511 1078428 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33845 <nil> <nil>}
	I1210 07:51:18.613536 1078428 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-237317' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-237317/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-237317' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:51:18.750663 1078428 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:51:18.750693 1078428 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 07:51:18.750726 1078428 ubuntu.go:190] setting up certificates
	I1210 07:51:18.750745 1078428 provision.go:84] configureAuth start
	I1210 07:51:18.750808 1078428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:51:18.768151 1078428 provision.go:143] copyHostCerts
	I1210 07:51:18.768234 1078428 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 07:51:18.768250 1078428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 07:51:18.768328 1078428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 07:51:18.768450 1078428 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 07:51:18.768462 1078428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 07:51:18.768492 1078428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 07:51:18.768566 1078428 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 07:51:18.768583 1078428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 07:51:18.768617 1078428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 07:51:18.768682 1078428 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.newest-cni-237317 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-237317]
	I1210 07:51:19.084729 1078428 provision.go:177] copyRemoteCerts
	I1210 07:51:19.084804 1078428 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:51:19.084849 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.104109 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:19.203019 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:51:19.223435 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:51:19.240802 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:51:19.257611 1078428 provision.go:87] duration metric: took 506.840522ms to configureAuth
	I1210 07:51:19.257643 1078428 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:51:19.257850 1078428 config.go:182] Loaded profile config "newest-cni-237317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:19.257864 1078428 machine.go:97] duration metric: took 4.057430572s to provisionDockerMachine
	I1210 07:51:19.257873 1078428 start.go:293] postStartSetup for "newest-cni-237317" (driver="docker")
	I1210 07:51:19.257887 1078428 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:51:19.257947 1078428 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:51:19.257992 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.274867 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:19.371336 1078428 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:51:19.375463 1078428 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:51:19.375497 1078428 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:51:19.375509 1078428 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 07:51:19.375559 1078428 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 07:51:19.375641 1078428 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 07:51:19.375745 1078428 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:51:19.386080 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:51:19.406230 1078428 start.go:296] duration metric: took 148.339109ms for postStartSetup
	I1210 07:51:19.406314 1078428 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:51:19.406379 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.424523 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:19.524843 1078428 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
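The two `df` probes measure how full `/var` is: with GNU df, `$5` under `-h` is the Use% column and `$4` under `-BG` is the available space in whole gigabytes. A sketch of the same checks from Go, reusing the shell strings exactly as they appear in the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        used, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
        if err != nil {
            panic(err)
        }
        avail, err := exec.Command("sh", "-c", `df -BG /var | awk 'NR==2{print $4}'`).Output()
        if err != nil {
            panic(err)
        }
        fmt.Printf("/var used: %s/var available: %s", used, avail) // each value ends in '\n'
    }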
	I1210 07:51:19.530920 1078428 fix.go:56] duration metric: took 4.767134196s for fixHost
	I1210 07:51:19.530943 1078428 start.go:83] releasing machines lock for "newest-cni-237317", held for 4.767180038s
	I1210 07:51:19.531010 1078428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:51:19.550838 1078428 ssh_runner.go:195] Run: cat /version.json
	I1210 07:51:19.550877 1078428 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:51:19.550890 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.550934 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.570871 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:19.573219 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:19.666233 1078428 ssh_runner.go:195] Run: systemctl --version
	I1210 07:51:19.757488 1078428 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:51:19.762554 1078428 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:51:19.762646 1078428 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:51:19.772614 1078428 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:51:19.772688 1078428 start.go:496] detecting cgroup driver to use...
	I1210 07:51:19.772735 1078428 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:51:19.772810 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:51:19.790830 1078428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:51:19.808563 1078428 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:51:19.808685 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:51:19.825219 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:51:19.839550 1078428 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:51:19.957848 1078428 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:51:20.106011 1078428 docker.go:234] disabling docker service ...
	I1210 07:51:20.106089 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:51:20.124597 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:51:20.139030 1078428 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:51:20.264730 1078428 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:51:20.405057 1078428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:51:20.418041 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:51:20.434060 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:51:20.443707 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:51:20.453162 1078428 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:51:20.453287 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:51:20.462485 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:51:20.471477 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:51:20.480685 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:51:20.489771 1078428 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:51:20.498259 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:51:20.507883 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:51:20.516803 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:51:20.525782 1078428 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:51:20.533254 1078428 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:51:20.540718 1078428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:20.693669 1078428 ssh_runner.go:195] Run: sudo systemctl restart containerd
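Taken together, the `sed` edits above rewrite `/etc/containerd/config.toml` before the restart: pin the sandbox (pause) image, leave OOM score adjustment unrestricted, force the runc v2 runtime, select the cgroupfs cgroup driver (`SystemdCgroup = false`), point CNI at `/etc/cni/net.d`, and allow unprivileged ports. An illustrative fragment consistent with those expressions (the exact section layout varies by containerd version; this is not the file itself):

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.10.1"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false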
	I1210 07:51:20.831153 1078428 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:51:20.831249 1078428 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:51:20.835049 1078428 start.go:564] Will wait 60s for crictl version
	I1210 07:51:20.835127 1078428 ssh_runner.go:195] Run: which crictl
	I1210 07:51:20.838628 1078428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:51:20.863125 1078428 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:51:20.863217 1078428 ssh_runner.go:195] Run: containerd --version
	I1210 07:51:20.884709 1078428 ssh_runner.go:195] Run: containerd --version
	I1210 07:51:20.910533 1078428 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 07:51:20.913646 1078428 cli_runner.go:164] Run: docker network inspect newest-cni-237317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:51:20.930416 1078428 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 07:51:20.934716 1078428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
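The bash one-liner above upserts the `host.minikube.internal` mapping: it filters out any line already ending in that name, appends the fresh entry, and copies the result back over `/etc/hosts` via a temp file (a plain shell redirect cannot write the root-owned file directly). A rough Go equivalent of that upsert, as an assumed helper rather than minikube's own code:

    package main

    import (
        "os"
        "strings"
    )

    // upsertHosts drops any /etc/hosts line ending in "\t<name>" and
    // appends "<ip>\t<name>", mirroring the grep -v / echo pipeline.
    func upsertHosts(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        // Values taken from the log; running this for real requires root.
        if err := upsertHosts("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }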
	I1210 07:51:20.948181 1078428 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 07:51:20.951046 1078428 kubeadm.go:884] updating cluster {Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:51:20.951211 1078428 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:51:20.951303 1078428 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:51:20.976663 1078428 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:51:20.976691 1078428 containerd.go:534] Images already preloaded, skipping extraction
	I1210 07:51:20.976756 1078428 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:51:21.000721 1078428 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:51:21.000745 1078428 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:51:21.000753 1078428 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1210 07:51:21.000851 1078428 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-237317 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
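The empty `ExecStart=` line in the unit text above is the standard systemd idiom for replacing, rather than appending to, a command in a drop-in: the first assignment clears whatever ExecStart the base kubelet unit defined, and the second sets the new command line. In general form:

    [Service]
    ExecStart=
    ExecStart=/path/to/new/binary --with-new-flags

The rendered drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further below.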
	I1210 07:51:21.000919 1078428 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:51:21.027129 1078428 cni.go:84] Creating CNI manager for ""
	I1210 07:51:21.027160 1078428 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:51:21.027182 1078428 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 07:51:21.027206 1078428 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-237317 NodeName:newest-cni-237317 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:51:21.027326 1078428 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-237317"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:51:21.027402 1078428 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:51:21.035339 1078428 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:51:21.035477 1078428 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:51:21.043040 1078428 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 07:51:21.056144 1078428 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:51:21.068486 1078428 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1210 07:51:21.080830 1078428 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:51:21.084334 1078428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:51:21.093747 1078428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:21.227754 1078428 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:51:21.255098 1078428 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317 for IP: 192.168.76.2
	I1210 07:51:21.255120 1078428 certs.go:195] generating shared ca certs ...
	I1210 07:51:21.255146 1078428 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:21.255299 1078428 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 07:51:21.255358 1078428 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 07:51:21.255372 1078428 certs.go:257] generating profile certs ...
	I1210 07:51:21.255486 1078428 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.key
	I1210 07:51:21.255553 1078428 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f
	I1210 07:51:21.255599 1078428 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key
	I1210 07:51:21.255719 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 07:51:21.255759 1078428 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 07:51:21.255770 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:51:21.255801 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:51:21.255838 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:51:21.255870 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 07:51:21.255919 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:51:21.256545 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:51:21.311093 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:51:21.352581 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:51:21.373410 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:51:21.394506 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:51:21.429692 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:51:21.462387 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:51:21.492668 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 07:51:21.520168 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 07:51:21.538625 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:51:21.556477 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 07:51:21.574823 1078428 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:51:21.587970 1078428 ssh_runner.go:195] Run: openssl version
	I1210 07:51:21.594082 1078428 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:21.601606 1078428 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:51:21.609233 1078428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:21.613206 1078428 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:21.613303 1078428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:21.655122 1078428 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:51:21.662415 1078428 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 07:51:21.669633 1078428 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 07:51:21.677051 1078428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 07:51:21.680913 1078428 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 07:51:21.680973 1078428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 07:51:21.722892 1078428 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:51:21.730172 1078428 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 07:51:21.737341 1078428 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 07:51:21.744828 1078428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 07:51:21.748681 1078428 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 07:51:21.748767 1078428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 07:51:21.790554 1078428 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
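Each `openssl x509 -hash -noout` run above prints the certificate's subject-name hash, and OpenSSL resolves trust anchors by looking up `/etc/ssl/certs/<hash>.0`; that is what the three `test -L` probes (`b5213941.0`, `51391683.0`, `3ec20f2e.0`) verify. The resulting layout is roughly the following (the hash-link target is assumed, since the log only shows the link names):

    /etc/ssl/certs/minikubeCA.pem -> /usr/share/ca-certificates/minikubeCA.pem
    /etc/ssl/certs/b5213941.0     -> minikubeCA.pem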
	I1210 07:51:21.797952 1078428 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:51:21.801618 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:51:21.842558 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:51:21.883251 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:51:21.924099 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:51:21.965360 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:51:22.007244 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
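Each `-checkend 86400` run asserts that the certificate will still be valid 86400 seconds (24 hours) from now; a nonzero exit would flag the cert as expiring. A sketch of the same assertion in Go, against one of the files checked above (error handling kept minimal):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Equivalent of `openssl x509 -checkend 86400`: the cert must not
        // expire before now + 24h.
        deadline := time.Now().Add(86400 * time.Second)
        fmt.Println("still valid in 24h:", deadline.Before(cert.NotAfter))
    }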
	I1210 07:51:22.049094 1078428 kubeadm.go:401] StartCluster: {Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:51:22.049233 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:51:22.049334 1078428 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:51:22.093879 1078428 cri.go:89] found id: ""
	I1210 07:51:22.094034 1078428 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:51:22.108858 1078428 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:51:22.108920 1078428 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:51:22.109002 1078428 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:51:22.119866 1078428 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:51:22.120478 1078428 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-237317" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:22.120794 1078428 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-784887/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-237317" cluster setting kubeconfig missing "newest-cni-237317" context setting]
	I1210 07:51:22.121355 1078428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:22.123034 1078428 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:51:22.139211 1078428 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1210 07:51:22.139284 1078428 kubeadm.go:602] duration metric: took 30.344057ms to restartPrimaryControlPlane
	I1210 07:51:22.139309 1078428 kubeadm.go:403] duration metric: took 90.22699ms to StartCluster
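The restart path above hinges on two checks: the `sudo ls` over kubeadm's state files found existing configuration (so minikube repairs the running control plane instead of re-running `kubeadm init`), and the `diff` of kubeadm.yaml against kubeadm.yaml.new came back clean, so no reconfiguration was needed. A sketch of that decision, as assumed logic rather than minikube's exact code:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same probe the log shows via ssh_runner: do kubeadm's state
        // files already exist on the node?
        err := exec.Command("sudo", "ls",
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/kubelet/config.yaml",
            "/var/lib/minikube/etcd").Run()
        if err == nil {
            fmt.Println("found existing configuration files, will attempt cluster restart")
        } else {
            fmt.Println("no existing cluster state, would run kubeadm init")
        }
    }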
	I1210 07:51:22.139351 1078428 settings.go:142] acquiring lock: {Name:mkea9bfe0055ec090e4ce10a287599541b65b38e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:22.139430 1078428 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:22.140615 1078428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:22.141197 1078428 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:51:22.141378 1078428 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:51:22.149299 1078428 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-237317"
	I1210 07:51:22.149322 1078428 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-237317"
	I1210 07:51:22.149353 1078428 host.go:66] Checking if "newest-cni-237317" exists ...
	I1210 07:51:22.149966 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:22.141985 1078428 config.go:182] Loaded profile config "newest-cni-237317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:22.150417 1078428 addons.go:70] Setting dashboard=true in profile "newest-cni-237317"
	I1210 07:51:22.150441 1078428 addons.go:239] Setting addon dashboard=true in "newest-cni-237317"
	W1210 07:51:22.150449 1078428 addons.go:248] addon dashboard should already be in state true
	I1210 07:51:22.150502 1078428 host.go:66] Checking if "newest-cni-237317" exists ...
	I1210 07:51:22.151022 1078428 addons.go:70] Setting default-storageclass=true in profile "newest-cni-237317"
	I1210 07:51:22.151064 1078428 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-237317"
	I1210 07:51:22.151139 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:22.151406 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:22.154353 1078428 out.go:179] * Verifying Kubernetes components...
	I1210 07:51:22.159801 1078428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:22.209413 1078428 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:51:22.216779 1078428 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:22.216810 1078428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:51:22.216899 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:22.223328 1078428 addons.go:239] Setting addon default-storageclass=true in "newest-cni-237317"
	I1210 07:51:22.223372 1078428 host.go:66] Checking if "newest-cni-237317" exists ...
	I1210 07:51:22.223787 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:22.224255 1078428 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 07:51:22.227259 1078428 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 07:51:22.230643 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 07:51:22.230670 1078428 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 07:51:22.230738 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:22.262205 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:22.304886 1078428 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:22.304913 1078428 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:51:22.305020 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:22.320571 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:22.350629 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:22.414331 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:22.428355 1078428 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:51:22.476480 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 07:51:22.476506 1078428 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 07:51:22.499604 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:22.511381 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.511434 1078428 retry.go:31] will retry after 354.449722ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.512377 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 07:51:22.512398 1078428 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 07:51:22.525695 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 07:51:22.525721 1078428 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 07:51:22.549890 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 07:51:22.549921 1078428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 07:51:22.571318 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 07:51:22.571360 1078428 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 07:51:22.590078 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 07:51:22.590107 1078428 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 07:51:22.605317 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 07:51:22.605341 1078428 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 07:51:22.618168 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 07:51:22.618200 1078428 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 07:51:22.632058 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:22.632138 1078428 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 07:51:22.645108 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:22.866802 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:23.047272 1078428 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:51:23.047355 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:23.047482 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.047505 1078428 retry.go:31] will retry after 239.047353ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:23.047709 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.047727 1078428 retry.go:31] will retry after 188.716917ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:23.047786 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.047796 1078428 retry.go:31] will retry after 517.712293ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
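All of the apply failures in this stretch share one root cause: kubectl validation tries to fetch the OpenAPI schema from the apiserver on localhost:8443, kube-apiserver is not yet listening after the restart, so every apply exits 1 with "connection refused" and retry.go schedules another attempt after a short, growing delay. A sketch of that retry shape, as an assumed simplification rather than minikube's retry.go (the kubectl invocation and attempt cap are illustrative):

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    func main() {
        delay := 200 * time.Millisecond
        for attempt := 1; attempt <= 10; attempt++ {
            err := exec.Command("kubectl", "apply", "--force", "-f",
                "/etc/kubernetes/addons/storage-provisioner.yaml").Run()
            if err == nil {
                fmt.Println("applied on attempt", attempt)
                return
            }
            // Randomized, roughly doubling backoff, matching the jittered
            // "will retry after ..." intervals in the log.
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            delay *= 2
        }
    }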
	I1210 07:51:23.237633 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:23.287256 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:23.302152 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.302252 1078428 retry.go:31] will retry after 469.586518ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
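	The retry.go:31 lines throughout this section show the pattern driving the whole log: each failed kubectl apply is rescheduled with a growing, jittered backoff (469ms, 517ms, 398ms here, climbing past 5s later on). A minimal sketch of that apply-with-backoff loop follows; the names (applyWithRetry) and delay constants (the 400ms base) are illustrative assumptions, not minikube's actual implementation.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// applyWithRetry re-runs apply until it succeeds or attempts run out,
	// sleeping a growing, jittered interval between tries: the "will retry
	// after ..." lines in the log above.
	func applyWithRetry(apply func() error, maxAttempts int) error {
		var err error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err = apply(); err == nil {
				return nil
			}
			// Illustrative base delay; minikube's real schedule differs.
			base := time.Duration(attempt) * 400 * time.Millisecond
			wait := base + time.Duration(rand.Int63n(int64(base)/2))
			fmt.Printf("apply failed, will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
		}
		return err
	}

	func main() {
		err := applyWithRetry(func() error {
			return errors.New("connect: connection refused")
		}, 3)
		fmt.Println("gave up:", err)
	}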
	W1210 07:51:23.346821 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.346867 1078428 retry.go:31] will retry after 517.463027ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1210 07:51:23.548102 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:23.566734 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:23.638131 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.638161 1078428 retry.go:31] will retry after 398.122111ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1210 07:51:23.772509 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:23.859471 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.859510 1078428 retry.go:31] will retry after 826.751645ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1210 07:51:23.865483 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:23.933950 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.933981 1078428 retry.go:31] will retry after 776.320293ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1210 07:51:24.037254 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:24.047892 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:24.103304 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.103348 1078428 retry.go:31] will retry after 781.805737ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
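	All of these failures share one root cause: before applying, kubectl validates each manifest against the OpenAPI schema it downloads from the apiserver, and with nothing answering on localhost:8443 even a well-formed manifest fails at that first step. The --validate=false hint in the error text only skips the schema download; the apply itself still needs a live apiserver, so this retry loop can only succeed once kube-apiserver is back. A hedged sketch of invoking that flag from Go: the manifest path is taken from the log, while the surrounding caller is an illustrative assumption, not minikube's code.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// --validate=false skips the client-side OpenAPI schema check; the
		// apply below would still fail if the apiserver is down.
		cmd := exec.Command("kubectl", "apply", "--force", "--validate=false",
			"-f", "/etc/kubernetes/addons/storageclass.yaml")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}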
	I1210 07:51:24.548307 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:24.687434 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:24.711319 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:24.773539 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.773577 1078428 retry.go:31] will retry after 997.771985ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1210 07:51:24.790786 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.790863 1078428 retry.go:31] will retry after 982.839582ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1210 07:51:24.886098 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:24.963470 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.963508 1078428 retry.go:31] will retry after 1.65409552s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1210 07:51:25.047816 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:25.547590 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:25.771778 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:25.774151 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:25.936732 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.936801 1078428 retry.go:31] will retry after 1.015181303s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1210 07:51:25.947734 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.947767 1078428 retry.go:31] will retry after 1.482437442s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1210 07:51:26.048146 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:26.547461 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:26.617808 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:26.678401 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:26.678435 1078428 retry.go:31] will retry after 1.557494695s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1210 07:51:26.952842 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:27.019482 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.019568 1078428 retry.go:31] will retry after 1.273355747s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1210 07:51:27.047573 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:27.431325 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:27.498014 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.498046 1078428 retry.go:31] will retry after 1.046464225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1210 07:51:27.548153 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:28.047490 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:28.236708 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:28.293309 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:28.313086 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.313117 1078428 retry.go:31] will retry after 2.925748723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1210 07:51:28.376082 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.376136 1078428 retry.go:31] will retry after 3.458373128s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1210 07:51:28.545585 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:28.548098 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:28.611335 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.611369 1078428 retry.go:31] will retry after 3.856495335s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:29.047665 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:29.547947 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:30.047725 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:30.548382 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:31.048336 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:31.239688 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:31.305382 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.305411 1078428 retry.go:31] will retry after 5.48588333s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.547900 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:31.835667 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:31.907250 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.907288 1078428 retry.go:31] will retry after 3.413940388s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:32.047433 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:32.468741 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:32.529582 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:32.529616 1078428 retry.go:31] will retry after 2.765741211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:32.547808 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:33.048388 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:33.547638 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:34.048299 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:34.547845 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:35.048329 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:35.295932 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:35.322379 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:35.361522 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:35.361555 1078428 retry.go:31] will retry after 3.648316362s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:35.394430 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:35.394485 1078428 retry.go:31] will retry after 5.549499405s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:35.547462 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:36.048235 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:36.547640 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:36.792053 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:36.857078 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:36.857110 1078428 retry.go:31] will retry after 8.697501731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:37.048326 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:37.548396 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:38.047529 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:38.547464 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:39.010651 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:39.048217 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:39.071638 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:39.071669 1078428 retry.go:31] will retry after 13.355816146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:39.547555 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:40.048271 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:40.548333 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:40.944176 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:41.005827 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:41.005869 1078428 retry.go:31] will retry after 6.58383212s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:41.047819 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:41.547642 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:42.048470 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:42.547646 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:43.047482 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:43.548313 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:44.048345 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:44.547780 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:45.048251 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:45.547682 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:45.555791 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:45.648631 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:45.648667 1078428 retry.go:31] will retry after 11.694093059s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:46.048267 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:46.547745 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:47.047711 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:47.547488 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:47.590140 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:47.657175 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:47.657216 1078428 retry.go:31] will retry after 17.707179987s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:48.047554 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:48.547523 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:49.048229 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:49.547855 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:50.048310 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:50.547470 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:51.048482 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:51.547803 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:52.048220 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:52.428493 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:52.490932 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:52.490967 1078428 retry.go:31] will retry after 16.825164958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:52.548145 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:53.047509 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:53.548344 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:54.047578 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:54.547773 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:55.047551 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:55.547690 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:56.047804 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:56.547512 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:57.048500 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:57.343638 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:57.401827 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:57.401862 1078428 retry.go:31] will retry after 12.086669618s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:57.548118 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:58.047490 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:58.547566 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:59.047512 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:59.547820 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:00.048277 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:00.547702 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:01.047690 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:01.548160 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:02.047532 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:02.547658 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:03.048174 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:03.547494 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:04.047488 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:04.547752 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:05.047684 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:05.364684 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:52:05.426426 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:05.426483 1078428 retry.go:31] will retry after 20.310563443s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:05.547649 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:06.047574 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:06.547647 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:07.048386 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:07.548191 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:08.047499 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:08.547510 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:09.047557 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:09.316912 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:09.386785 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:09.386818 1078428 retry.go:31] will retry after 17.689212788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
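
Every one of these applies fails at validation rather than at submission: kubectl validates manifests against the OpenAPI schema it downloads from the apiserver, so with nothing listening on localhost:8443 even a well-formed manifest is rejected before it is sent. The suggested --validate=false would skip the schema fetch, but the apply itself would still need a reachable apiserver. A sketch of the logged invocation, with the binary and kubeconfig paths copied from the log and the helper itself purely illustrative:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // applyManifest mirrors the invocation logged above: sudo runs the
    // pinned kubectl binary with KUBECONFIG set on the sudo command line.
    // Paths are copied from the log; this helper is illustrative only.
    func applyManifest(manifest string) error {
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "apply", "--force", "-f", manifest)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("apply %s: %w\n%s", manifest, err, out)
        }
        return nil
    }

    func main() {
        if err := applyManifest("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
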
	I1210 07:52:09.489070 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:52:09.547482 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:52:09.552880 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:09.552917 1078428 retry.go:31] will retry after 27.483688335s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:10.047697 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:10.548124 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:11.047626 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:11.548296 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:12.048335 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:12.548247 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:13.047495 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:13.547530 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:14.047549 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:14.547736 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:15.047574 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:15.548227 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:16.047516 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:16.548114 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:17.047567 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:17.547679 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:18.048185 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:18.548203 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:19.047660 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:19.547978 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:20.048384 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:20.548389 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:21.048134 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:21.547434 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:22.048274 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
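
The half-second pgrep cadence above is minikube waiting for a kube-apiserver process to reappear inside the node. A minimal local sketch of that wait loop (the real calls go through ssh_runner into the node; pgrep exits 0 only when a matching process exists):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls for a kube-apiserver process every 500ms,
    // matching the cadence of the ssh_runner lines above, until it shows
    // up or the timeout expires.
    func waitForAPIServer(timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a matching process exists.
            err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
            if err == nil {
                return true
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false
    }

    func main() {
        fmt.Println("kube-apiserver running:", waitForAPIServer(10*time.Second))
    }
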
	I1210 07:52:22.547540 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:22.547641 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:22.572419 1078428 cri.go:89] found id: ""
	I1210 07:52:22.572446 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.572457 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:22.572464 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:22.572530 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:22.596895 1078428 cri.go:89] found id: ""
	I1210 07:52:22.596923 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.596931 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:22.596938 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:22.597000 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:22.621678 1078428 cri.go:89] found id: ""
	I1210 07:52:22.621705 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.621713 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:22.621720 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:22.621783 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:22.646160 1078428 cri.go:89] found id: ""
	I1210 07:52:22.646188 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.646198 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:22.646205 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:22.646270 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:22.671641 1078428 cri.go:89] found id: ""
	I1210 07:52:22.671670 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.671680 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:22.671686 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:22.671750 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:22.697149 1078428 cri.go:89] found id: ""
	I1210 07:52:22.697177 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.697187 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:22.697194 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:22.697255 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:22.722276 1078428 cri.go:89] found id: ""
	I1210 07:52:22.722300 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.722318 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:22.722324 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:22.722388 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:22.751396 1078428 cri.go:89] found id: ""
	I1210 07:52:22.751422 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.751431 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
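
Each diagnostic cycle like the one above queries the CRI runtime for the control-plane containers by name. With --quiet, crictl ps prints one container ID per line, so empty output is exactly what produces the found id: "" and "0 containers" lines. A sketch of that query, assuming only the crictl flags already shown in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers wraps the crictl call from the log: with --quiet,
    // crictl ps prints one container ID per line, so empty output means
    // no matching container (the `found id: ""` case above).
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainers(name)
            fmt.Printf("%s: %d containers %v (err=%v)\n", name, len(ids), ids, err)
        }
    }
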
	I1210 07:52:22.751440 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:22.751452 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:22.806571 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:22.806611 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:22.824584 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:22.824623 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:22.902683 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:22.894538    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.894950    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.896552    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.897020    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.898547    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:22.894538    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.894950    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.896552    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.897020    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.898547    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:22.902704 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:22.902719 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:22.928289 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:22.928326 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
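
With no control-plane containers to inspect, the tail of each cycle falls back to host-level sources: the kubelet and containerd journald units, the kernel ring buffer, and whatever crictl (or docker) reports. A sketch of that fan-out, with the command lines copied verbatim from the log and the surrounding loop illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherLogs runs the same host-level collectors as the cycle above.
    // The command lines are copied from the log; the loop is illustrative.
    func gatherLogs() map[string]string {
        sources := map[string]string{
            "kubelet":          "sudo journalctl -u kubelet -n 400",
            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "containerd":       "sudo journalctl -u containerd -n 400",
            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        }
        logs := make(map[string]string)
        for name, cmdline := range sources {
            // CombinedOutput keeps stderr too, which is where crictl
            // and journalctl report their own failures.
            out, _ := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
            logs[name] = string(out)
        }
        return logs
    }

    func main() {
        for name, out := range gatherLogs() {
            fmt.Printf("=== %s: %d bytes ===\n", name, len(out))
        }
    }
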
	I1210 07:52:25.461464 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:25.472201 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:25.472303 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:25.498226 1078428 cri.go:89] found id: ""
	I1210 07:52:25.498253 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.498263 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:25.498269 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:25.498331 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:25.524731 1078428 cri.go:89] found id: ""
	I1210 07:52:25.524759 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.524777 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:25.524789 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:25.524855 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:25.554155 1078428 cri.go:89] found id: ""
	I1210 07:52:25.554178 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.554187 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:25.554194 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:25.554252 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:25.580553 1078428 cri.go:89] found id: ""
	I1210 07:52:25.580584 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.580593 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:25.580599 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:25.580669 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:25.606241 1078428 cri.go:89] found id: ""
	I1210 07:52:25.606309 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.606341 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:25.606369 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:25.606449 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:25.630882 1078428 cri.go:89] found id: ""
	I1210 07:52:25.630912 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.630921 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:25.630928 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:25.631028 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:25.657178 1078428 cri.go:89] found id: ""
	I1210 07:52:25.657207 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.657215 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:25.657221 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:25.657282 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:25.686580 1078428 cri.go:89] found id: ""
	I1210 07:52:25.686604 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.686612 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:25.686622 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:25.686634 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:25.737209 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:52:25.742985 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:25.743060 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:52:25.816909 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:25.817156 1078428 retry.go:31] will retry after 25.212576039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:25.818420 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:25.818454 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:25.889855 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:25.881344    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.882021    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.883760    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.884373    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.885999    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:25.881344    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.882021    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.883760    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.884373    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.885999    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:25.889919 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:25.889939 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:25.915022 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:25.915058 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:27.076870 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:27.134892 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:27.134924 1078428 retry.go:31] will retry after 48.20102621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:28.443268 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:28.454097 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:28.454172 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:28.482759 1078428 cri.go:89] found id: ""
	I1210 07:52:28.482789 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.482798 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:28.482805 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:28.482868 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:28.507737 1078428 cri.go:89] found id: ""
	I1210 07:52:28.507760 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.507769 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:28.507775 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:28.507836 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:28.532881 1078428 cri.go:89] found id: ""
	I1210 07:52:28.532907 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.532916 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:28.532923 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:28.532989 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:28.562425 1078428 cri.go:89] found id: ""
	I1210 07:52:28.562451 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.562460 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:28.562489 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:28.562551 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:28.587926 1078428 cri.go:89] found id: ""
	I1210 07:52:28.587952 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.587961 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:28.587967 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:28.588026 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:28.613523 1078428 cri.go:89] found id: ""
	I1210 07:52:28.613593 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.613617 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:28.613638 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:28.613730 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:28.637796 1078428 cri.go:89] found id: ""
	I1210 07:52:28.637864 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.637888 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:28.637907 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:28.637993 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:28.666907 1078428 cri.go:89] found id: ""
	I1210 07:52:28.666937 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.666946 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:28.666956 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:28.666968 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:28.722569 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:28.722604 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:28.738517 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:28.738592 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:28.814307 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:28.798713    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.799649    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.803563    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.804492    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.807397    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:28.798713    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.799649    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.803563    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.804492    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.807397    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:28.814366 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:28.814395 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:28.842824 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:28.842905 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:31.380548 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:31.391083 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:31.391159 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:31.416470 1078428 cri.go:89] found id: ""
	I1210 07:52:31.416496 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.416504 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:31.416510 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:31.416570 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:31.441740 1078428 cri.go:89] found id: ""
	I1210 07:52:31.441767 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.441776 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:31.441782 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:31.441843 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:31.465834 1078428 cri.go:89] found id: ""
	I1210 07:52:31.465860 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.465869 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:31.465875 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:31.465935 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:31.492061 1078428 cri.go:89] found id: ""
	I1210 07:52:31.492085 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.492093 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:31.492099 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:31.492177 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:31.515891 1078428 cri.go:89] found id: ""
	I1210 07:52:31.515971 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.515993 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:31.516010 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:31.516096 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:31.540039 1078428 cri.go:89] found id: ""
	I1210 07:52:31.540061 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.540069 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:31.540076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:31.540169 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:31.565345 1078428 cri.go:89] found id: ""
	I1210 07:52:31.565372 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.565388 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:31.565395 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:31.565513 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:31.590011 1078428 cri.go:89] found id: ""
	I1210 07:52:31.590035 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.590044 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:31.590074 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:31.590089 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:31.656796 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:31.649034    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.649484    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.650940    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.651317    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.652719    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:31.649034    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.649484    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.650940    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.651317    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.652719    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:31.656816 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:31.656828 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:31.681821 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:31.681855 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:31.709786 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:31.709815 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:31.764688 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:31.764728 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:34.283681 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:34.296241 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:34.296314 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:34.337179 1078428 cri.go:89] found id: ""
	I1210 07:52:34.337201 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.337210 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:34.337216 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:34.337274 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:34.369583 1078428 cri.go:89] found id: ""
	I1210 07:52:34.369611 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.369619 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:34.369625 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:34.369683 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:34.395566 1078428 cri.go:89] found id: ""
	I1210 07:52:34.395591 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.395600 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:34.395606 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:34.395688 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:34.419610 1078428 cri.go:89] found id: ""
	I1210 07:52:34.419677 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.419702 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:34.419718 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:34.419797 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:34.444441 1078428 cri.go:89] found id: ""
	I1210 07:52:34.444511 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.444535 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:34.444550 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:34.444627 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:34.469517 1078428 cri.go:89] found id: ""
	I1210 07:52:34.469540 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.469549 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:34.469556 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:34.469618 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:34.494093 1078428 cri.go:89] found id: ""
	I1210 07:52:34.494120 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.494129 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:34.494136 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:34.494196 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:34.518575 1078428 cri.go:89] found id: ""
	I1210 07:52:34.518658 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.518674 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:34.518685 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:34.518698 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:34.534743 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:34.534770 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:34.597542 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:34.589981    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.590406    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.591861    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.592187    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.593602    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:34.589981    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.590406    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.591861    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.592187    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.593602    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:34.597564 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:34.597577 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:34.622841 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:34.622876 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:34.653362 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:34.653395 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:37.036872 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:52:37.117418 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:37.117451 1078428 retry.go:31] will retry after 42.271832156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:37.209642 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:37.220263 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:37.220360 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:37.244517 1078428 cri.go:89] found id: ""
	I1210 07:52:37.244544 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.244552 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:37.244558 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:37.244619 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:37.269073 1078428 cri.go:89] found id: ""
	I1210 07:52:37.269099 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.269108 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:37.269114 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:37.269175 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:37.292561 1078428 cri.go:89] found id: ""
	I1210 07:52:37.292587 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.292596 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:37.292604 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:37.292661 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:37.330286 1078428 cri.go:89] found id: ""
	I1210 07:52:37.330312 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.330321 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:37.330328 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:37.330388 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:37.362527 1078428 cri.go:89] found id: ""
	I1210 07:52:37.362555 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.362564 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:37.362570 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:37.362633 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:37.387887 1078428 cri.go:89] found id: ""
	I1210 07:52:37.387912 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.387921 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:37.387927 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:37.387988 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:37.412303 1078428 cri.go:89] found id: ""
	I1210 07:52:37.412329 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.412337 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:37.412344 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:37.412451 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:37.436571 1078428 cri.go:89] found id: ""
	I1210 07:52:37.436596 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.436605 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
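
This same eight-container sweep repeats every few seconds below. The equivalent one-shot check inside the node, for anyone reproducing this by hand, might be (a sketch):

    # print the container ID (or nothing) for each expected component
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      printf '%-24s %s\n' "$c" "$(sudo crictl ps -a --quiet --name="$c")"
    done

An empty column for every component, as in this run, means containerd itself is up but no Kubernetes containers were ever created.
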
	I1210 07:52:37.436614 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:37.436626 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:37.462030 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:37.462074 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:37.489847 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:37.489875 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:37.545757 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:37.545792 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:37.561730 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:37.561763 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:37.627065 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:37.618607    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.619149    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.620826    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.621602    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.623135    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:40.127737 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:40.139792 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:40.139876 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:40.166917 1078428 cri.go:89] found id: ""
	I1210 07:52:40.166944 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.166952 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:40.166964 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:40.167028 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:40.193972 1078428 cri.go:89] found id: ""
	I1210 07:52:40.194000 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.194009 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:40.194015 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:40.194111 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:40.226660 1078428 cri.go:89] found id: ""
	I1210 07:52:40.226693 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.226702 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:40.226709 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:40.226774 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:40.257013 1078428 cri.go:89] found id: ""
	I1210 07:52:40.257056 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.257067 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:40.257074 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:40.257140 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:40.282449 1078428 cri.go:89] found id: ""
	I1210 07:52:40.282500 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.282509 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:40.282516 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:40.282580 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:40.332986 1078428 cri.go:89] found id: ""
	I1210 07:52:40.333018 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.333027 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:40.333050 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:40.333188 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:40.366223 1078428 cri.go:89] found id: ""
	I1210 07:52:40.366258 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.366268 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:40.366275 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:40.366347 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:40.393136 1078428 cri.go:89] found id: ""
	I1210 07:52:40.393163 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.393171 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:40.393181 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:40.393193 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:40.422285 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:40.422314 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:40.481326 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:40.481365 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:40.497675 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:40.497725 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:40.562074 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:40.554513    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.554932    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.556446    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.556761    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.558191    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:40.562093 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:40.562106 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:43.088690 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:43.099750 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:43.099828 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:43.124516 1078428 cri.go:89] found id: ""
	I1210 07:52:43.124552 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.124561 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:43.124567 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:43.124628 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:43.153325 1078428 cri.go:89] found id: ""
	I1210 07:52:43.153347 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.153356 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:43.153362 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:43.153423 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:43.178405 1078428 cri.go:89] found id: ""
	I1210 07:52:43.178429 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.178437 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:43.178443 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:43.178609 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:43.201768 1078428 cri.go:89] found id: ""
	I1210 07:52:43.201791 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.201800 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:43.201806 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:43.201865 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:43.225907 1078428 cri.go:89] found id: ""
	I1210 07:52:43.225931 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.225940 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:43.225946 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:43.226004 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:43.250803 1078428 cri.go:89] found id: ""
	I1210 07:52:43.250828 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.250837 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:43.250843 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:43.250916 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:43.275081 1078428 cri.go:89] found id: ""
	I1210 07:52:43.275147 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.275161 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:43.275168 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:43.275245 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:43.306794 1078428 cri.go:89] found id: ""
	I1210 07:52:43.306827 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.306836 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:43.306845 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:43.306857 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:43.337826 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:43.337854 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:43.396050 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:43.396089 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:43.413002 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:43.413031 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:43.479541 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:43.471065    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.471844    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.473576    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.474063    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.475610    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:43.479565 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:43.479578 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:46.005454 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:46.017579 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:46.017658 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:46.053539 1078428 cri.go:89] found id: ""
	I1210 07:52:46.053570 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.053579 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:46.053585 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:46.053649 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:46.088548 1078428 cri.go:89] found id: ""
	I1210 07:52:46.088572 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.088581 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:46.088596 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:46.088660 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:46.126497 1078428 cri.go:89] found id: ""
	I1210 07:52:46.126571 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.126594 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:46.126613 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:46.126734 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:46.150556 1078428 cri.go:89] found id: ""
	I1210 07:52:46.150626 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.150643 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:46.150651 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:46.150719 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:46.174996 1078428 cri.go:89] found id: ""
	I1210 07:52:46.175019 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.175027 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:46.175033 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:46.175107 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:46.199701 1078428 cri.go:89] found id: ""
	I1210 07:52:46.199726 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.199735 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:46.199742 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:46.199845 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:46.224632 1078428 cri.go:89] found id: ""
	I1210 07:52:46.224657 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.224666 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:46.224672 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:46.224752 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:46.248234 1078428 cri.go:89] found id: ""
	I1210 07:52:46.248259 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.248267 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:46.248277 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:46.248334 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:46.264183 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:46.264221 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:46.342979 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:46.323053    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.323907    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.328271    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.328706    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.338602    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:46.343063 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:46.343092 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:46.369476 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:46.369511 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:46.397302 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:46.397339 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:48.952567 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:48.962857 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:48.962931 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:48.992562 1078428 cri.go:89] found id: ""
	I1210 07:52:48.992589 1078428 logs.go:282] 0 containers: []
	W1210 07:52:48.992599 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:48.992606 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:48.992671 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:49.018277 1078428 cri.go:89] found id: ""
	I1210 07:52:49.018303 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.018312 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:49.018318 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:49.018387 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:49.045715 1078428 cri.go:89] found id: ""
	I1210 07:52:49.045743 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.045752 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:49.045758 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:49.045826 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:49.083318 1078428 cri.go:89] found id: ""
	I1210 07:52:49.083348 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.083358 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:49.083364 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:49.083422 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:49.109936 1078428 cri.go:89] found id: ""
	I1210 07:52:49.109958 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.109966 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:49.109989 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:49.110049 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:49.134580 1078428 cri.go:89] found id: ""
	I1210 07:52:49.134607 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.134617 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:49.134623 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:49.134681 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:49.159828 1078428 cri.go:89] found id: ""
	I1210 07:52:49.159906 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.159924 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:49.159931 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:49.160011 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:49.184837 1078428 cri.go:89] found id: ""
	I1210 07:52:49.184862 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.184872 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:49.184881 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:49.184902 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:49.210656 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:49.210691 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:49.241224 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:49.241256 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:49.303253 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:49.303297 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:49.319808 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:49.319838 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:49.389423 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:49.381044    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.381468    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.383366    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.383682    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.385122    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:51.030067 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:52:51.093289 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:51.093415 1078428 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
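
The dashboard addon fails for the same underlying reason as storage-provisioner: all ten manifests need the apiserver, and nothing answers on 8443. Once the control plane is actually healthy, re-enabling it is a single command (a sketch; profile name taken from this run):

    minikube -p functional-534748 addons enable dashboard
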
	I1210 07:52:51.889686 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:51.900249 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:51.900353 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:51.925533 1078428 cri.go:89] found id: ""
	I1210 07:52:51.925559 1078428 logs.go:282] 0 containers: []
	W1210 07:52:51.925567 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:51.925621 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:51.925706 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:51.950161 1078428 cri.go:89] found id: ""
	I1210 07:52:51.950186 1078428 logs.go:282] 0 containers: []
	W1210 07:52:51.950194 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:51.950201 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:51.950280 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:51.976938 1078428 cri.go:89] found id: ""
	I1210 07:52:51.976964 1078428 logs.go:282] 0 containers: []
	W1210 07:52:51.976972 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:51.976979 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:51.977038 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:52.006745 1078428 cri.go:89] found id: ""
	I1210 07:52:52.006841 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.006865 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:52.006887 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:52.007015 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:52.033557 1078428 cri.go:89] found id: ""
	I1210 07:52:52.033585 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.033595 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:52.033601 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:52.033672 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:52.066821 1078428 cri.go:89] found id: ""
	I1210 07:52:52.066850 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.066860 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:52.066867 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:52.066929 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:52.101024 1078428 cri.go:89] found id: ""
	I1210 07:52:52.101051 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.101060 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:52.101067 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:52.101128 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:52.130045 1078428 cri.go:89] found id: ""
	I1210 07:52:52.130070 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.130079 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:52.130088 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:52.130100 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:52.184627 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:52.184662 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:52.200733 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:52.200759 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:52.265577 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:52.257022    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.257577    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.259300    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.259861    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.261490    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:52.265610 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:52.265626 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:52.291354 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:52.291390 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:54.834203 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
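The readiness probe here is a plain process check rather than an HTTP one. A sketch of what it does, assuming standard procps-ng pgrep semantics:

	# From the log line above: -x exact match, -n newest matching process,
	# -f match the pattern against the full command line.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# No output and exit status 1 mean no kube-apiserver process exists on
	# the node, so the code falls through to the crictl queries below.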
	I1210 07:52:54.845400 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:54.845510 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:54.871357 1078428 cri.go:89] found id: ""
	I1210 07:52:54.871383 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.871392 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:54.871399 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:54.871463 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:54.897322 1078428 cri.go:89] found id: ""
	I1210 07:52:54.897352 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.897360 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:54.897366 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:54.897425 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:54.922291 1078428 cri.go:89] found id: ""
	I1210 07:52:54.922320 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.922329 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:54.922334 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:54.922405 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:54.947056 1078428 cri.go:89] found id: ""
	I1210 07:52:54.947080 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.947089 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:54.947095 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:54.947155 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:54.972572 1078428 cri.go:89] found id: ""
	I1210 07:52:54.972599 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.972608 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:54.972614 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:54.972675 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:54.997657 1078428 cri.go:89] found id: ""
	I1210 07:52:54.997685 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.997694 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:54.997700 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:54.997777 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:55.025796 1078428 cri.go:89] found id: ""
	I1210 07:52:55.025819 1078428 logs.go:282] 0 containers: []
	W1210 07:52:55.025829 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:55.025835 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:55.026185 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:55.069593 1078428 cri.go:89] found id: ""
	I1210 07:52:55.069631 1078428 logs.go:282] 0 containers: []
	W1210 07:52:55.069640 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:55.069649 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:55.069662 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:55.135748 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:55.135788 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:55.151784 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:55.151815 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:55.220457 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:55.212663    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.213305    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.215040    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.215532    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.216531    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:55.220480 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:55.220495 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:55.245834 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:55.245869 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:57.774707 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:57.785110 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:57.785178 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:57.810275 1078428 cri.go:89] found id: ""
	I1210 07:52:57.810302 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.810320 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:57.810328 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:57.810389 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:57.838839 1078428 cri.go:89] found id: ""
	I1210 07:52:57.838862 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.838871 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:57.838877 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:57.838937 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:57.863185 1078428 cri.go:89] found id: ""
	I1210 07:52:57.863212 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.863221 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:57.863227 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:57.863287 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:57.890204 1078428 cri.go:89] found id: ""
	I1210 07:52:57.890234 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.890244 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:57.890250 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:57.890314 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:57.916593 1078428 cri.go:89] found id: ""
	I1210 07:52:57.916616 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.916624 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:57.916630 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:57.916690 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:57.940351 1078428 cri.go:89] found id: ""
	I1210 07:52:57.940373 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.940381 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:57.940387 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:57.940448 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:57.965417 1078428 cri.go:89] found id: ""
	I1210 07:52:57.965453 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.965462 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:57.965469 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:57.965535 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:57.989157 1078428 cri.go:89] found id: ""
	I1210 07:52:57.989183 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.989192 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:57.989202 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:57.989213 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:58.015326 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:58.015366 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
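The container-status step is defensive about which runtime CLI is present. The same fallback chain as the command above, written with $(...) instead of backticks:

	# Prefer crictl from PATH, fall back to the bare name, and only shell
	# out to docker if the crictl invocation fails entirely.
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a

On this containerd node the crictl branch evidently succeeds, so the docker fallback is never exercised.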
	I1210 07:52:58.055222 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:58.055248 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:58.115866 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:58.115945 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:58.131823 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:58.131852 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:58.196880 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:58.188547    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.189067    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.190600    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.191285    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.192890    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:00.697148 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:00.707593 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:00.707661 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:00.735938 1078428 cri.go:89] found id: ""
	I1210 07:53:00.735962 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.735971 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:00.735977 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:00.736039 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:00.759785 1078428 cri.go:89] found id: ""
	I1210 07:53:00.759808 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.759817 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:00.759823 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:00.759887 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:00.784529 1078428 cri.go:89] found id: ""
	I1210 07:53:00.784552 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.784561 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:00.784567 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:00.784641 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:00.813420 1078428 cri.go:89] found id: ""
	I1210 07:53:00.813443 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.813452 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:00.813459 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:00.813518 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:00.838413 1078428 cri.go:89] found id: ""
	I1210 07:53:00.838439 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.838449 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:00.838455 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:00.838559 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:00.862923 1078428 cri.go:89] found id: ""
	I1210 07:53:00.862949 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.862968 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:00.862975 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:00.863034 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:00.890339 1078428 cri.go:89] found id: ""
	I1210 07:53:00.890366 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.890375 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:00.890381 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:00.890440 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:00.916963 1078428 cri.go:89] found id: ""
	I1210 07:53:00.916992 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.917001 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:00.917010 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:00.917022 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:00.972565 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:00.972601 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:00.990064 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:00.990154 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:01.068497 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:01.060518    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.061332    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.062990    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.063328    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.064654    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:01.068521 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:01.068534 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:01.097602 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:01.097641 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:03.628666 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:03.639440 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:03.639518 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:03.664498 1078428 cri.go:89] found id: ""
	I1210 07:53:03.664523 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.664531 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:03.664538 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:03.664601 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:03.688357 1078428 cri.go:89] found id: ""
	I1210 07:53:03.688382 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.688391 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:03.688397 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:03.688460 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:03.712874 1078428 cri.go:89] found id: ""
	I1210 07:53:03.712898 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.712906 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:03.712913 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:03.712990 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:03.737610 1078428 cri.go:89] found id: ""
	I1210 07:53:03.737635 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.737643 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:03.737650 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:03.737712 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:03.762668 1078428 cri.go:89] found id: ""
	I1210 07:53:03.762695 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.762703 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:03.762710 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:03.762769 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:03.795710 1078428 cri.go:89] found id: ""
	I1210 07:53:03.795732 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.795741 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:03.795747 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:03.795809 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:03.819247 1078428 cri.go:89] found id: ""
	I1210 07:53:03.819275 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.819285 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:03.819291 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:03.819355 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:03.842854 1078428 cri.go:89] found id: ""
	I1210 07:53:03.842881 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.842891 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:03.842900 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:03.842911 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:03.858681 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:03.858748 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:03.922352 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:03.913909    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.914521    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.916231    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.916791    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.918447    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:03.922383 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:03.922401 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:03.948481 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:03.948520 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:03.977218 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:03.977247 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:06.532410 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:06.544357 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:06.544451 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:06.576472 1078428 cri.go:89] found id: ""
	I1210 07:53:06.576500 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.576511 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:06.576517 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:06.576581 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:06.609024 1078428 cri.go:89] found id: ""
	I1210 07:53:06.609051 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.609061 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:06.609067 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:06.609134 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:06.636182 1078428 cri.go:89] found id: ""
	I1210 07:53:06.636209 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.636218 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:06.636224 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:06.636286 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:06.664610 1078428 cri.go:89] found id: ""
	I1210 07:53:06.664677 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.664699 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:06.664720 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:06.664812 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:06.690522 1078428 cri.go:89] found id: ""
	I1210 07:53:06.690548 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.690557 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:06.690564 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:06.690626 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:06.716006 1078428 cri.go:89] found id: ""
	I1210 07:53:06.716035 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.716044 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:06.716050 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:06.716115 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:06.740705 1078428 cri.go:89] found id: ""
	I1210 07:53:06.740726 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.740734 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:06.740741 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:06.740803 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:06.764831 1078428 cri.go:89] found id: ""
	I1210 07:53:06.764852 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.764860 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:06.764869 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:06.764881 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:06.820337 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:06.820372 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:06.836899 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:06.836931 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:06.902143 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:06.893655    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.894319    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.895838    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.896417    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.898017    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:06.902164 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:06.902178 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:06.927253 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:06.927289 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:09.458854 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:09.469382 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:09.469466 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:09.494769 1078428 cri.go:89] found id: ""
	I1210 07:53:09.494791 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.494799 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:09.494805 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:09.494866 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:09.520347 1078428 cri.go:89] found id: ""
	I1210 07:53:09.520374 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.520383 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:09.520390 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:09.520454 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:09.549983 1078428 cri.go:89] found id: ""
	I1210 07:53:09.550010 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.550019 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:09.550025 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:09.550085 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:09.588794 1078428 cri.go:89] found id: ""
	I1210 07:53:09.588821 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.588830 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:09.588836 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:09.588895 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:09.617370 1078428 cri.go:89] found id: ""
	I1210 07:53:09.617393 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.617401 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:09.617407 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:09.617465 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:09.645730 1078428 cri.go:89] found id: ""
	I1210 07:53:09.645755 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.645779 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:09.645786 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:09.645850 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:09.672062 1078428 cri.go:89] found id: ""
	I1210 07:53:09.672088 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.672097 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:09.672103 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:09.672174 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:09.695770 1078428 cri.go:89] found id: ""
	I1210 07:53:09.695793 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.695802 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:09.695811 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:09.695822 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:09.721144 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:09.721180 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:09.748337 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:09.748367 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:09.802348 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:09.802384 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:09.818196 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:09.818226 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:09.884770 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:09.876535    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.877364    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.879072    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.879398    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.880893    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
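By this point the identical gather cycle has repeated roughly every two and a half seconds since 07:52:52, which points at an apiserver that never started rather than one refusing connections transiently. A sketch for confirming that nothing is bound to the port (illustrative, not from the log; assumes iproute2's ss is available in the node image and <profile> is the profile under test):

	minikube -p <profile> ssh -- sudo ss -ltnp 'sport = :8443'
	# An empty listing means no listener on 8443, consistent with every
	# connection-refused error above.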
	I1210 07:53:12.385627 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:12.396288 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:12.396367 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:12.421158 1078428 cri.go:89] found id: ""
	I1210 07:53:12.421194 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.421204 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:12.421210 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:12.421281 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:12.446171 1078428 cri.go:89] found id: ""
	I1210 07:53:12.446206 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.446216 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:12.446222 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:12.446294 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:12.470791 1078428 cri.go:89] found id: ""
	I1210 07:53:12.470818 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.470828 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:12.470836 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:12.470895 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:12.499441 1078428 cri.go:89] found id: ""
	I1210 07:53:12.499467 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.499476 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:12.499483 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:12.499561 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:12.524188 1078428 cri.go:89] found id: ""
	I1210 07:53:12.524211 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.524219 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:12.524225 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:12.524285 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:12.550501 1078428 cri.go:89] found id: ""
	I1210 07:53:12.550528 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.550537 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:12.550543 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:12.550617 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:12.578576 1078428 cri.go:89] found id: ""
	I1210 07:53:12.578602 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.578611 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:12.578616 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:12.578687 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:12.612078 1078428 cri.go:89] found id: ""
	I1210 07:53:12.612113 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.612122 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:12.612132 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:12.612144 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:12.645096 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:12.645125 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:12.700179 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:12.700217 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:12.715578 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:12.715606 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:12.781369 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:12.772466    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.773589    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.774343    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.775989    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.776533    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:12.781391 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:12.781403 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:15.306176 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:15.317232 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:15.317315 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:15.336640 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:53:15.353595 1078428 cri.go:89] found id: ""
	I1210 07:53:15.353626 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.353635 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:15.353642 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:15.353703 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1210 07:53:15.421893 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:53:15.421994 1078428 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
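The default-storageclass addon is applied with a plain kubectl apply and retried on failure, and apply-time validation needs the OpenAPI schema from the apiserver, so it fails for the same connection-refused reason as everything above. The failing invocation, reconstructed from the log:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	  -f /etc/kubernetes/addons/storageclass.yaml
	# --validate=false would skip the schema download, as the error text
	# suggests, but the apply itself would still fail while nothing is
	# serving on 8443.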
	I1210 07:53:15.422157 1078428 cri.go:89] found id: ""
	I1210 07:53:15.422177 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.422185 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:15.422192 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:15.422270 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:15.447660 1078428 cri.go:89] found id: ""
	I1210 07:53:15.447684 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.447693 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:15.447699 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:15.447763 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:15.471893 1078428 cri.go:89] found id: ""
	I1210 07:53:15.471918 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.471927 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:15.471934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:15.472003 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:15.496880 1078428 cri.go:89] found id: ""
	I1210 07:53:15.496915 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.496924 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:15.496930 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:15.496999 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:15.525007 1078428 cri.go:89] found id: ""
	I1210 07:53:15.525043 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.525055 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:15.525061 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:15.525138 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:15.556732 1078428 cri.go:89] found id: ""
	I1210 07:53:15.556776 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.556785 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:15.556792 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:15.556864 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:15.592802 1078428 cri.go:89] found id: ""
	I1210 07:53:15.592835 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.592844 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:15.592854 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:15.592866 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:15.660809 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:15.660846 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:15.677009 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:15.677040 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:15.743204 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:15.734495    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.735207    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.736858    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.737446    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.739134    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
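
	Each "Gathering logs for ..." step shells out through minikube's command runner and records stdout and stderr separately, which is why the failed "describe nodes" entry above carries both streams. A self-contained sketch of that capture pattern, assuming only Go's standard library (run and its arguments here are hypothetical, not minikube's ssh_runner API):

	    package main

	    import (
	        "bytes"
	        "fmt"
	        "os/exec"
	    )

	    // run executes a command and returns stdout and stderr separately,
	    // mirroring how the report shows both streams for a failed command.
	    func run(name string, args ...string) (string, string, error) {
	        var out, errBuf bytes.Buffer
	        cmd := exec.Command(name, args...)
	        cmd.Stdout = &out
	        cmd.Stderr = &errBuf
	        err := cmd.Run() // a non-zero exit comes back as *exec.ExitError
	        return out.String(), errBuf.String(), err
	    }

	    func main() {
	        out, errOut, err := run("kubectl", "describe", "nodes")
	        fmt.Printf("stdout:\n%s\nstderr:\n%s\nerr: %v\n", out, errOut, err)
	    }
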
	I1210 07:53:15.743227 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:15.743239 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:15.768020 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:15.768053 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:18.297028 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:18.310128 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:18.310198 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:18.340476 1078428 cri.go:89] found id: ""
	I1210 07:53:18.340572 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.340599 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:18.340642 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:18.340769 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:18.369516 1078428 cri.go:89] found id: ""
	I1210 07:53:18.369582 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.369614 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:18.369633 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:18.369753 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:18.396295 1078428 cri.go:89] found id: ""
	I1210 07:53:18.396321 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.396330 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:18.396336 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:18.396428 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:18.422012 1078428 cri.go:89] found id: ""
	I1210 07:53:18.422037 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.422046 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:18.422052 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:18.422164 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:18.446495 1078428 cri.go:89] found id: ""
	I1210 07:53:18.446518 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.446526 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:18.446532 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:18.446600 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:18.471650 1078428 cri.go:89] found id: ""
	I1210 07:53:18.471674 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.471682 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:18.471688 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:18.471779 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:18.495591 1078428 cri.go:89] found id: ""
	I1210 07:53:18.495616 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.495624 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:18.495631 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:18.495694 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:18.523464 1078428 cri.go:89] found id: ""
	I1210 07:53:18.523489 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.523497 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:18.523506 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:18.523518 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:18.585434 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:18.585481 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:18.610315 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:18.610344 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:18.674572 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:18.666289    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.667061    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.668721    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.669016    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.670764    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:18.674593 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:18.674607 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:18.699401 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:18.699435 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:19.389521 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:53:19.452005 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:53:19.452105 1078428 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:53:19.455408 1078428 out.go:179] * Enabled addons: 
	I1210 07:53:19.458237 1078428 addons.go:530] duration metric: took 1m57.316864384s for enable addons: enabled=[]
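
	After retrying the storageclass and storage-provisioner applies for just under two minutes, the addon manager gives up and reports an empty enabled list. The shape of that "apply failed, will retry" behavior is a deadline-bounded retry loop; a sketch under that assumption (retryUntil is hypothetical, not addons.go):

	    package main

	    import (
	        "context"
	        "errors"
	        "fmt"
	        "time"
	    )

	    // retryUntil calls fn until it succeeds or ctx expires.
	    func retryUntil(ctx context.Context, interval time.Duration, fn func() error) error {
	        for {
	            if err := fn(); err == nil {
	                return nil
	            }
	            select {
	            case <-ctx.Done():
	                return ctx.Err()
	            case <-time.After(interval):
	            }
	        }
	    }

	    func main() {
	        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	        defer cancel()
	        start := time.Now()
	        // Stand-in for the failing kubectl apply callbacks.
	        err := retryUntil(ctx, 3*time.Second, func() error {
	            return errors.New("connection refused")
	        })
	        fmt.Printf("gave up after %s: %v\n", time.Since(start), err)
	    }
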
	I1210 07:53:21.227168 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:21.237506 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:21.237577 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:21.261812 1078428 cri.go:89] found id: ""
	I1210 07:53:21.261842 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.261852 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:21.261858 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:21.261921 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:21.289741 1078428 cri.go:89] found id: ""
	I1210 07:53:21.289767 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.289787 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:21.289794 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:21.289855 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:21.331373 1078428 cri.go:89] found id: ""
	I1210 07:53:21.331400 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.331410 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:21.331415 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:21.331534 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:21.364401 1078428 cri.go:89] found id: ""
	I1210 07:53:21.364427 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.364436 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:21.364443 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:21.364504 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:21.395936 1078428 cri.go:89] found id: ""
	I1210 07:53:21.395965 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.395975 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:21.395981 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:21.396044 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:21.420965 1078428 cri.go:89] found id: ""
	I1210 07:53:21.420996 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.421005 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:21.421012 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:21.421073 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:21.446318 1078428 cri.go:89] found id: ""
	I1210 07:53:21.446345 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.446354 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:21.446360 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:21.446422 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:21.475470 1078428 cri.go:89] found id: ""
	I1210 07:53:21.475499 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.475509 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:21.475521 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:21.475537 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:21.530313 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:21.530354 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:21.548651 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:21.548737 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:21.632055 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:21.623055    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.623614    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.625291    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.625976    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.627769    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:21.632137 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:21.632157 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:21.659428 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:21.659466 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:24.192421 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:24.203056 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:24.203137 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:24.232457 1078428 cri.go:89] found id: ""
	I1210 07:53:24.232493 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.232502 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:24.232509 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:24.232576 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:24.260730 1078428 cri.go:89] found id: ""
	I1210 07:53:24.260758 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.260768 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:24.260774 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:24.260837 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:24.284981 1078428 cri.go:89] found id: ""
	I1210 07:53:24.285009 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.285018 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:24.285024 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:24.285086 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:24.316578 1078428 cri.go:89] found id: ""
	I1210 07:53:24.316604 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.316613 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:24.316619 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:24.316678 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:24.353587 1078428 cri.go:89] found id: ""
	I1210 07:53:24.353622 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.353638 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:24.353645 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:24.353740 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:24.384460 1078428 cri.go:89] found id: ""
	I1210 07:53:24.384483 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.384492 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:24.384498 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:24.384562 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:24.414252 1078428 cri.go:89] found id: ""
	I1210 07:53:24.414280 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.414290 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:24.414296 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:24.414361 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:24.442225 1078428 cri.go:89] found id: ""
	I1210 07:53:24.442247 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.442256 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:24.442265 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:24.442276 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:24.467596 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:24.467629 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:24.499949 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:24.499977 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:24.558185 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:24.558223 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:24.576232 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:24.576264 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:24.646699 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:24.638205    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.639089    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.639811    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.641363    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.641799    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
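
	The timestamps show the same probe cycle repeating roughly every three seconds: pgrep for a kube-apiserver process, crictl listings for each control-plane container, then log gathering. A ticker-based sketch of that cadence, assuming Go's standard library (checkAPIServer is a hypothetical stand-in for the pgrep/crictl probes, not a minikube function):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // checkAPIServer mirrors the probe in the log; pgrep exits
	    // non-zero when no matching process exists.
	    func checkAPIServer() bool {
	        return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	    }

	    func main() {
	        ticker := time.NewTicker(3 * time.Second)
	        defer ticker.Stop()
	        for range ticker.C {
	            if checkAPIServer() {
	                fmt.Println("kube-apiserver process found")
	                return
	            }
	            fmt.Println("kube-apiserver not running; gathering logs and retrying")
	        }
	    }
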
	I1210 07:53:27.148382 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:27.158984 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:27.159102 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:27.183857 1078428 cri.go:89] found id: ""
	I1210 07:53:27.183927 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.183943 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:27.183951 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:27.184028 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:27.207461 1078428 cri.go:89] found id: ""
	I1210 07:53:27.207529 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.207554 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:27.207568 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:27.207645 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:27.234849 1078428 cri.go:89] found id: ""
	I1210 07:53:27.234876 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.234884 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:27.234890 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:27.234948 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:27.258887 1078428 cri.go:89] found id: ""
	I1210 07:53:27.258910 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.258919 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:27.258926 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:27.258983 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:27.283113 1078428 cri.go:89] found id: ""
	I1210 07:53:27.283189 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.283206 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:27.283214 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:27.283283 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:27.324968 1078428 cri.go:89] found id: ""
	I1210 07:53:27.324994 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.325004 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:27.325010 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:27.325070 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:27.355711 1078428 cri.go:89] found id: ""
	I1210 07:53:27.355739 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.355749 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:27.355755 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:27.355817 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:27.383387 1078428 cri.go:89] found id: ""
	I1210 07:53:27.383424 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.383435 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:27.383445 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:27.383456 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:27.408324 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:27.408363 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:27.438348 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:27.438424 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:27.496282 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:27.496317 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:27.512354 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:27.512385 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:27.586988 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:27.577963    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.578714    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.580435    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.580907    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.582816    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:30.088030 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:30.100373 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:30.100449 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:30.127922 1078428 cri.go:89] found id: ""
	I1210 07:53:30.127998 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.128023 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:30.128041 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:30.128120 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:30.160672 1078428 cri.go:89] found id: ""
	I1210 07:53:30.160699 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.160709 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:30.160722 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:30.160784 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:30.186050 1078428 cri.go:89] found id: ""
	I1210 07:53:30.186077 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.186086 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:30.186093 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:30.186157 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:30.211107 1078428 cri.go:89] found id: ""
	I1210 07:53:30.211132 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.211141 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:30.211147 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:30.211213 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:30.235571 1078428 cri.go:89] found id: ""
	I1210 07:53:30.235598 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.235608 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:30.235615 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:30.235678 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:30.264308 1078428 cri.go:89] found id: ""
	I1210 07:53:30.264331 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.264339 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:30.264346 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:30.264413 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:30.288489 1078428 cri.go:89] found id: ""
	I1210 07:53:30.288557 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.288581 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:30.288594 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:30.288673 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:30.318600 1078428 cri.go:89] found id: ""
	I1210 07:53:30.318628 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.318638 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:30.318648 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:30.318679 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:30.359074 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:30.359103 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:30.417146 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:30.417182 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:30.432931 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:30.432960 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:30.497452 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:30.488702    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.489502    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.491238    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.491784    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.493510    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:30.497474 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:30.497487 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:33.027579 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:33.038128 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:33.038197 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:33.063535 1078428 cri.go:89] found id: ""
	I1210 07:53:33.063560 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.063572 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:33.063578 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:33.063642 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:33.087384 1078428 cri.go:89] found id: ""
	I1210 07:53:33.087406 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.087414 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:33.087420 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:33.087478 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:33.112186 1078428 cri.go:89] found id: ""
	I1210 07:53:33.112247 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.112258 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:33.112265 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:33.112326 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:33.136102 1078428 cri.go:89] found id: ""
	I1210 07:53:33.136125 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.136133 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:33.136139 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:33.136202 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:33.160865 1078428 cri.go:89] found id: ""
	I1210 07:53:33.160931 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.160957 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:33.160986 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:33.161071 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:33.185964 1078428 cri.go:89] found id: ""
	I1210 07:53:33.186031 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.186054 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:33.186075 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:33.186150 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:33.211060 1078428 cri.go:89] found id: ""
	I1210 07:53:33.211086 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.211095 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:33.211100 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:33.211180 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:33.236111 1078428 cri.go:89] found id: ""
	I1210 07:53:33.236180 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.236213 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:33.236227 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:33.236251 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:33.252003 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:33.252029 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:33.315902 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:33.308251    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.308659    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.310144    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.310442    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.311844    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:33.315967 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:33.316003 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:33.342524 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:33.342604 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:33.377391 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:33.377419 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:35.933860 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:35.945070 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:35.945142 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:35.971394 1078428 cri.go:89] found id: ""
	I1210 07:53:35.971423 1078428 logs.go:282] 0 containers: []
	W1210 07:53:35.971432 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:35.971438 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:35.971501 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:36.005170 1078428 cri.go:89] found id: ""
	I1210 07:53:36.005227 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.005240 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:36.005248 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:36.005329 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:36.035275 1078428 cri.go:89] found id: ""
	I1210 07:53:36.035299 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.035307 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:36.035313 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:36.035380 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:36.060232 1078428 cri.go:89] found id: ""
	I1210 07:53:36.060255 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.060266 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:36.060272 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:36.060336 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:36.084825 1078428 cri.go:89] found id: ""
	I1210 07:53:36.084850 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.084859 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:36.084866 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:36.084955 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:36.110606 1078428 cri.go:89] found id: ""
	I1210 07:53:36.110630 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.110639 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:36.110664 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:36.110728 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:36.139205 1078428 cri.go:89] found id: ""
	I1210 07:53:36.139232 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.139241 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:36.139248 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:36.139358 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:36.165255 1078428 cri.go:89] found id: ""
	I1210 07:53:36.165279 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.165287 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:36.165296 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:36.165308 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:36.190967 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:36.191003 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:36.228036 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:36.228070 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:36.283588 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:36.283626 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:36.308631 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:36.308660 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:36.382721 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:36.374555    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.375219    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.376727    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.377183    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.378650    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
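The describe-nodes failure is secondary damage: kubectl cannot reach the apiserver at all, so every API group probe dies with connection refused on [::1]:8443. A two-line TCP probe reproduces the same signal without kubectl (host and port are read off the log above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Matches the `dial tcp [::1]:8443: connect: connection refused` lines above.
		fmt.Println("apiserver not up:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}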
	I1210 07:53:38.882925 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:38.893611 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:38.893738 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:38.919385 1078428 cri.go:89] found id: ""
	I1210 07:53:38.919418 1078428 logs.go:282] 0 containers: []
	W1210 07:53:38.919427 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:38.919433 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:38.919504 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:38.943787 1078428 cri.go:89] found id: ""
	I1210 07:53:38.943814 1078428 logs.go:282] 0 containers: []
	W1210 07:53:38.943824 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:38.943832 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:38.943896 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:38.968361 1078428 cri.go:89] found id: ""
	I1210 07:53:38.968433 1078428 logs.go:282] 0 containers: []
	W1210 07:53:38.968451 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:38.968458 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:38.968520 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:38.995636 1078428 cri.go:89] found id: ""
	I1210 07:53:38.995661 1078428 logs.go:282] 0 containers: []
	W1210 07:53:38.995670 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:38.995677 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:38.995754 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:39.021416 1078428 cri.go:89] found id: ""
	I1210 07:53:39.021452 1078428 logs.go:282] 0 containers: []
	W1210 07:53:39.021462 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:39.021470 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:39.021552 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:39.048415 1078428 cri.go:89] found id: ""
	I1210 07:53:39.048441 1078428 logs.go:282] 0 containers: []
	W1210 07:53:39.048450 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:39.048456 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:39.048545 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:39.074528 1078428 cri.go:89] found id: ""
	I1210 07:53:39.074554 1078428 logs.go:282] 0 containers: []
	W1210 07:53:39.074563 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:39.074569 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:39.074633 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:39.099525 1078428 cri.go:89] found id: ""
	I1210 07:53:39.099551 1078428 logs.go:282] 0 containers: []
	W1210 07:53:39.099571 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:39.099581 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:39.099594 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:39.166056 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:39.157362    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.157880    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.159752    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.160310    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.161990    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:39.166080 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:39.166094 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:39.191445 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:39.191482 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:39.221901 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:39.221931 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:39.276698 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:39.276735 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
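The "Gathering logs for ..." steps are each a single bash one-liner capped at the last 400 lines. A local sketch of the same fan-out (minikube runs these inside the node over SSH; the commands and the 400-line cap are taken from the log above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Each log source is one bash pipeline capped at the last 400 lines.
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	}
	for _, s := range sources {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		fmt.Printf("== %s: %d bytes (err=%v) ==\n", s.name, len(out), err)
	}
}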
	I1210 07:53:41.793231 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:41.806351 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:41.806419 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:41.833486 1078428 cri.go:89] found id: ""
	I1210 07:53:41.833508 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.833517 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:41.833523 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:41.833587 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:41.863627 1078428 cri.go:89] found id: ""
	I1210 07:53:41.863650 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.863659 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:41.863665 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:41.863723 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:41.891468 1078428 cri.go:89] found id: ""
	I1210 07:53:41.891492 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.891502 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:41.891509 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:41.891575 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:41.916517 1078428 cri.go:89] found id: ""
	I1210 07:53:41.916542 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.916550 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:41.916557 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:41.916616 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:41.942528 1078428 cri.go:89] found id: ""
	I1210 07:53:41.942555 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.942577 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:41.942584 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:41.942646 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:41.966600 1078428 cri.go:89] found id: ""
	I1210 07:53:41.966624 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.966633 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:41.966639 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:41.966707 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:41.990797 1078428 cri.go:89] found id: ""
	I1210 07:53:41.990831 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.990840 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:41.990846 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:41.990914 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:42.024121 1078428 cri.go:89] found id: ""
	I1210 07:53:42.024148 1078428 logs.go:282] 0 containers: []
	W1210 07:53:42.024158 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:42.024169 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:42.024181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:42.080753 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:42.080799 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:42.098930 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:42.098965 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:42.176005 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:42.165806    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.166811    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.168570    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.169232    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.170912    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:42.176075 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:42.176108 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:42.205998 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:42.206045 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
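The "container status" step uses a shell fallback: prefer crictl when `which` finds it, otherwise try docker. A sketch wrapping the same one-liner (copied verbatim from the log) in a local process:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same one-liner as the log: use crictl when `which` finds it; if the
	// crictl call fails, fall back to docker.
	const cmd = "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("both crictl and docker listings failed:", err)
	}
	fmt.Print(string(out))
}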
	I1210 07:53:44.740690 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:44.751788 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:44.751908 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:44.777536 1078428 cri.go:89] found id: ""
	I1210 07:53:44.777563 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.777571 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:44.777578 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:44.777640 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:44.805133 1078428 cri.go:89] found id: ""
	I1210 07:53:44.805161 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.805170 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:44.805176 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:44.805237 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:44.842340 1078428 cri.go:89] found id: ""
	I1210 07:53:44.842368 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.842383 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:44.842390 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:44.842451 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:44.875009 1078428 cri.go:89] found id: ""
	I1210 07:53:44.875035 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.875044 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:44.875050 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:44.875144 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:44.900854 1078428 cri.go:89] found id: ""
	I1210 07:53:44.900880 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.900889 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:44.900895 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:44.900993 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:44.926168 1078428 cri.go:89] found id: ""
	I1210 07:53:44.926194 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.926203 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:44.926210 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:44.926302 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:44.951565 1078428 cri.go:89] found id: ""
	I1210 07:53:44.951590 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.951599 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:44.951605 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:44.951700 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:44.981123 1078428 cri.go:89] found id: ""
	I1210 07:53:44.981151 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.981160 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:44.981170 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:44.981181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:45.061176 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:45.048172    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.049203    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.052057    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.052627    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.055814    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:45.061213 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:45.061227 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:45.119245 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:45.119283 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:45.172398 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:45.172430 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:45.255583 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:45.255726 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:47.779428 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:47.790537 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:47.790611 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:47.831579 1078428 cri.go:89] found id: ""
	I1210 07:53:47.831602 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.831610 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:47.831617 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:47.831677 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:47.859808 1078428 cri.go:89] found id: ""
	I1210 07:53:47.859835 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.859844 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:47.859850 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:47.859916 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:47.885720 1078428 cri.go:89] found id: ""
	I1210 07:53:47.885745 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.885754 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:47.885761 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:47.885829 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:47.910568 1078428 cri.go:89] found id: ""
	I1210 07:53:47.910594 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.910604 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:47.910610 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:47.910668 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:47.934447 1078428 cri.go:89] found id: ""
	I1210 07:53:47.934495 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.934505 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:47.934511 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:47.934571 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:47.959745 1078428 cri.go:89] found id: ""
	I1210 07:53:47.959772 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.959782 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:47.959788 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:47.959871 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:47.984059 1078428 cri.go:89] found id: ""
	I1210 07:53:47.984085 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.984095 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:47.984102 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:47.984163 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:48.011978 1078428 cri.go:89] found id: ""
	I1210 07:53:48.012007 1078428 logs.go:282] 0 containers: []
	W1210 07:53:48.012018 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:48.012030 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:48.012043 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:48.069700 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:48.069738 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:48.086303 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:48.086345 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:48.160973 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:48.152656    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.153231    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.154815    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.155339    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.156984    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:48.160994 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:48.161008 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:48.185832 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:48.185868 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:50.713469 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:50.724372 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:50.724452 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:50.750268 1078428 cri.go:89] found id: ""
	I1210 07:53:50.750292 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.750300 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:50.750306 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:50.750368 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:50.776624 1078428 cri.go:89] found id: ""
	I1210 07:53:50.776689 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.776704 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:50.776711 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:50.776769 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:50.807024 1078428 cri.go:89] found id: ""
	I1210 07:53:50.807051 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.807060 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:50.807070 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:50.807127 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:50.851753 1078428 cri.go:89] found id: ""
	I1210 07:53:50.851831 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.851855 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:50.851879 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:50.852000 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:50.878419 1078428 cri.go:89] found id: ""
	I1210 07:53:50.878571 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.878589 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:50.878597 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:50.878667 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:50.904710 1078428 cri.go:89] found id: ""
	I1210 07:53:50.904741 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.904750 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:50.904756 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:50.904819 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:50.929368 1078428 cri.go:89] found id: ""
	I1210 07:53:50.929398 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.929421 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:50.929428 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:50.929495 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:50.956973 1078428 cri.go:89] found id: ""
	I1210 07:53:50.956998 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.957006 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:50.957016 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:50.957028 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:50.982743 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:50.982778 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:51.015675 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:51.015706 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:51.072656 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:51.072697 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:51.089028 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:51.089115 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:51.156089 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:51.147578    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.148310    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.150005    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.150357    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.151933    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
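The outer loop is visible in the timestamps: a `sudo pgrep -xnf kube-apiserver.*minikube.*` probe roughly every three seconds, each miss followed by the container scan and log gathering above. A sketch of that wait loop (interval and deadline are assumptions read off the log, not minikube constants):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed; the real run waits far longer
	for time.Now().Before(deadline) {
		// pgrep exits non-zero when nothing matches, which Run() surfaces as an error.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s gaps between probes in the timestamps
	}
	fmt.Println("timed out waiting for kube-apiserver")
}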
	I1210 07:53:53.657305 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:53.668282 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:53.668364 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:53.693314 1078428 cri.go:89] found id: ""
	I1210 07:53:53.693340 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.693349 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:53.693356 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:53.693417 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:53.718128 1078428 cri.go:89] found id: ""
	I1210 07:53:53.718154 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.718169 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:53.718176 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:53.718234 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:53.744359 1078428 cri.go:89] found id: ""
	I1210 07:53:53.744397 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.744406 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:53.744412 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:53.744485 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:53.773658 1078428 cri.go:89] found id: ""
	I1210 07:53:53.773737 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.773760 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:53.773782 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:53.773879 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:53.804702 1078428 cri.go:89] found id: ""
	I1210 07:53:53.804772 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.804796 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:53.804815 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:53.804905 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:53.840639 1078428 cri.go:89] found id: ""
	I1210 07:53:53.840706 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.840730 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:53.840753 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:53.840846 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:53.869303 1078428 cri.go:89] found id: ""
	I1210 07:53:53.869373 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.869397 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:53.869419 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:53.869508 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:53.898651 1078428 cri.go:89] found id: ""
	I1210 07:53:53.898742 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.898764 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:53.898787 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:53.898821 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:53.924144 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:53.924181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:53.953086 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:53.953118 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:54.008451 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:54.008555 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:54.027281 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:54.027312 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:54.091065 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:54.082296    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.083017    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.084636    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.085187    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.086808    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:56.591259 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:56.602391 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:56.602493 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:56.627566 1078428 cri.go:89] found id: ""
	I1210 07:53:56.627597 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.627607 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:56.627614 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:56.627677 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:56.654900 1078428 cri.go:89] found id: ""
	I1210 07:53:56.654928 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.654937 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:56.654944 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:56.655007 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:56.679562 1078428 cri.go:89] found id: ""
	I1210 07:53:56.679592 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.679606 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:56.679612 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:56.679737 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:56.703320 1078428 cri.go:89] found id: ""
	I1210 07:53:56.703345 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.703355 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:56.703361 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:56.703420 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:56.731538 1078428 cri.go:89] found id: ""
	I1210 07:53:56.731564 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.731573 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:56.731579 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:56.731664 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:56.756416 1078428 cri.go:89] found id: ""
	I1210 07:53:56.756442 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.756451 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:56.756457 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:56.756523 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:56.785074 1078428 cri.go:89] found id: ""
	I1210 07:53:56.785097 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.785106 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:56.785111 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:56.785171 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:56.815793 1078428 cri.go:89] found id: ""
	I1210 07:53:56.815821 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.815831 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:56.815842 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:56.815856 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:56.834351 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:56.834380 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:56.907823 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:56.899947    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.900451    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.901995    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.902428    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.903899    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:56.907857 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:56.907871 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:56.933197 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:56.933233 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:56.964346 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:56.964378 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:59.520946 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:59.531324 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:59.531414 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:59.563870 1078428 cri.go:89] found id: ""
	I1210 07:53:59.563897 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.563907 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:59.563913 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:59.564000 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:59.593355 1078428 cri.go:89] found id: ""
	I1210 07:53:59.593385 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.593394 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:59.593400 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:59.593468 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:59.620235 1078428 cri.go:89] found id: ""
	I1210 07:53:59.620263 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.620272 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:59.620278 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:59.620338 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:59.645074 1078428 cri.go:89] found id: ""
	I1210 07:53:59.645099 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.645108 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:59.645114 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:59.645178 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:59.673804 1078428 cri.go:89] found id: ""
	I1210 07:53:59.673830 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.673839 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:59.673845 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:59.673902 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:59.697766 1078428 cri.go:89] found id: ""
	I1210 07:53:59.697793 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.697803 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:59.697810 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:59.697868 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:59.725582 1078428 cri.go:89] found id: ""
	I1210 07:53:59.725608 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.725617 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:59.725623 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:59.725681 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:59.750402 1078428 cri.go:89] found id: ""
	I1210 07:53:59.750428 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.750437 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:59.750447 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:59.750458 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:59.775346 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:59.775383 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:59.815776 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:59.815804 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:59.876120 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:59.876164 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:59.897440 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:59.897470 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:59.962486 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:59.954416    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.955115    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.956695    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.956999    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.958498    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:59.954416    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.955115    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.956695    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.956999    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.958498    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:02.463154 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:02.473950 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:02.474039 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:02.498884 1078428 cri.go:89] found id: ""
	I1210 07:54:02.498907 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.498916 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:02.498923 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:02.498982 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:02.523553 1078428 cri.go:89] found id: ""
	I1210 07:54:02.523582 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.523591 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:02.523597 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:02.523660 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:02.552876 1078428 cri.go:89] found id: ""
	I1210 07:54:02.552902 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.552911 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:02.552918 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:02.552976 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:02.583793 1078428 cri.go:89] found id: ""
	I1210 07:54:02.583818 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.583827 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:02.583833 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:02.583895 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:02.625932 1078428 cri.go:89] found id: ""
	I1210 07:54:02.625959 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.625969 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:02.625976 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:02.626044 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:02.652709 1078428 cri.go:89] found id: ""
	I1210 07:54:02.652784 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.652800 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:02.652808 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:02.652868 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:02.680830 1078428 cri.go:89] found id: ""
	I1210 07:54:02.680859 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.680868 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:02.680874 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:02.680933 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:02.706663 1078428 cri.go:89] found id: ""
	I1210 07:54:02.706687 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.706696 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:02.706704 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:02.706715 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:02.763069 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:02.763105 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:02.779309 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:02.779340 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:02.864302 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:02.854179    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.854989    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.858064    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.858689    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.860317    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:02.854179    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.854989    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.858064    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.858689    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.860317    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:02.864326 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:02.864339 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:02.890235 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:02.890274 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:05.418128 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:05.429523 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:05.429604 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:05.456726 1078428 cri.go:89] found id: ""
	I1210 07:54:05.456755 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.456765 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:05.456772 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:05.456851 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:05.485039 1078428 cri.go:89] found id: ""
	I1210 07:54:05.485065 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.485074 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:05.485080 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:05.485169 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:05.510634 1078428 cri.go:89] found id: ""
	I1210 07:54:05.510658 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.510668 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:05.510674 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:05.510733 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:05.536710 1078428 cri.go:89] found id: ""
	I1210 07:54:05.536743 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.536753 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:05.536760 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:05.536848 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:05.568911 1078428 cri.go:89] found id: ""
	I1210 07:54:05.568991 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.569015 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:05.569040 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:05.569150 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:05.598888 1078428 cri.go:89] found id: ""
	I1210 07:54:05.598964 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.598987 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:05.599007 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:05.599101 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:05.630665 1078428 cri.go:89] found id: ""
	I1210 07:54:05.630741 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.630771 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:05.630779 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:05.630850 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:05.654676 1078428 cri.go:89] found id: ""
	I1210 07:54:05.654702 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.654712 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:05.654722 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:05.654733 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:05.712685 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:05.712722 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:05.728743 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:05.728774 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:05.807287 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:05.790159    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.790981    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.792596    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.793583    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.794194    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:05.790159    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.790981    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.792596    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.793583    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.794194    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:05.807311 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:05.807325 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:05.835209 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:05.835246 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:08.367017 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:08.377830 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:08.377904 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:08.402753 1078428 cri.go:89] found id: ""
	I1210 07:54:08.402778 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.402787 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:08.402795 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:08.402856 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:08.427920 1078428 cri.go:89] found id: ""
	I1210 07:54:08.427947 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.427956 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:08.427963 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:08.428021 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:08.453012 1078428 cri.go:89] found id: ""
	I1210 07:54:08.453037 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.453045 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:08.453052 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:08.453114 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:08.477565 1078428 cri.go:89] found id: ""
	I1210 07:54:08.477591 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.477606 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:08.477612 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:08.477673 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:08.501669 1078428 cri.go:89] found id: ""
	I1210 07:54:08.501694 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.501740 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:08.501750 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:08.501816 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:08.530594 1078428 cri.go:89] found id: ""
	I1210 07:54:08.530667 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.530704 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:08.530719 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:08.530799 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:08.561145 1078428 cri.go:89] found id: ""
	I1210 07:54:08.561171 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.561179 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:08.561186 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:08.561244 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:08.595663 1078428 cri.go:89] found id: ""
	I1210 07:54:08.595686 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.595695 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:08.595706 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:08.595718 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:08.622963 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:08.623002 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:08.652801 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:08.652829 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:08.708272 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:08.708307 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:08.724144 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:08.724174 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:08.790000 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:08.782113    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.782760    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.784347    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.784840    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.786314    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:08.782113    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.782760    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.784347    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.784840    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.786314    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
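	(Between describe-nodes attempts, the collector enumerates the expected control-plane containers by name through the CRI; each query returns an empty ID list, which is why every component is reported as 'No container was found matching "..."'. A condensed sketch of that enumeration, assuming the same crictl invocation shown above; the loop itself is an illustration, not the collector's literal code:
	
	    # query the CRI for each expected component; every lookup is empty in this run
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name=$name
	    done
	)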
	I1210 07:54:11.291584 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:11.302037 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:11.302111 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:11.331607 1078428 cri.go:89] found id: ""
	I1210 07:54:11.331631 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.331640 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:11.331646 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:11.331711 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:11.355008 1078428 cri.go:89] found id: ""
	I1210 07:54:11.355031 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.355039 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:11.355045 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:11.355104 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:11.380347 1078428 cri.go:89] found id: ""
	I1210 07:54:11.380423 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.380463 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:11.380485 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:11.380572 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:11.410797 1078428 cri.go:89] found id: ""
	I1210 07:54:11.410824 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.410834 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:11.410840 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:11.410898 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:11.435927 1078428 cri.go:89] found id: ""
	I1210 07:54:11.435996 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.436021 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:11.436035 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:11.436109 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:11.461484 1078428 cri.go:89] found id: ""
	I1210 07:54:11.461520 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.461529 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:11.461536 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:11.461603 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:11.486793 1078428 cri.go:89] found id: ""
	I1210 07:54:11.486817 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.486825 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:11.486831 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:11.486890 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:11.515338 1078428 cri.go:89] found id: ""
	I1210 07:54:11.515364 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.515374 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:11.515384 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:11.515396 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:11.593473 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:11.585339    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.586129    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.587754    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.588062    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.589542    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:11.585339    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.586129    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.587754    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.588062    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.589542    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:11.593495 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:11.593509 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:11.619492 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:11.619523 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:11.646739 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:11.646771 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:11.701149 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:11.701187 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:14.217342 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:14.228228 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:14.228306 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:14.254323 1078428 cri.go:89] found id: ""
	I1210 07:54:14.254360 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.254369 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:14.254375 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:14.254443 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:14.279268 1078428 cri.go:89] found id: ""
	I1210 07:54:14.279295 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.279303 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:14.279310 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:14.279397 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:14.304531 1078428 cri.go:89] found id: ""
	I1210 07:54:14.304558 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.304567 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:14.304574 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:14.304647 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:14.329458 1078428 cri.go:89] found id: ""
	I1210 07:54:14.329487 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.329496 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:14.329502 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:14.329563 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:14.359168 1078428 cri.go:89] found id: ""
	I1210 07:54:14.359241 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.359258 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:14.359266 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:14.359348 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:14.386391 1078428 cri.go:89] found id: ""
	I1210 07:54:14.386426 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.386435 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:14.386442 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:14.386540 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:14.411808 1078428 cri.go:89] found id: ""
	I1210 07:54:14.411843 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.411862 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:14.411870 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:14.411946 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:14.440262 1078428 cri.go:89] found id: ""
	I1210 07:54:14.440292 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.440301 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:14.440311 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:14.440322 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:14.496340 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:14.496376 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:14.512934 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:14.512963 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:14.584969 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:14.576398    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.577208    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.578910    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.579485    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.581005    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:14.576398    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.577208    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.578910    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.579485    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.581005    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:14.585042 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:14.585069 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:14.615045 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:14.615086 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:17.146612 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:17.157236 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:17.157307 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:17.184080 1078428 cri.go:89] found id: ""
	I1210 07:54:17.184102 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.184111 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:17.184117 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:17.184177 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:17.212720 1078428 cri.go:89] found id: ""
	I1210 07:54:17.212745 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.212754 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:17.212760 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:17.212822 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:17.238495 1078428 cri.go:89] found id: ""
	I1210 07:54:17.238521 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.238529 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:17.238542 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:17.238603 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:17.262892 1078428 cri.go:89] found id: ""
	I1210 07:54:17.262921 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.262930 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:17.262936 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:17.262996 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:17.291473 1078428 cri.go:89] found id: ""
	I1210 07:54:17.291498 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.291508 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:17.291514 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:17.291573 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:17.317108 1078428 cri.go:89] found id: ""
	I1210 07:54:17.317133 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.317142 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:17.317149 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:17.317209 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:17.344918 1078428 cri.go:89] found id: ""
	I1210 07:54:17.344944 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.344953 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:17.344959 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:17.345019 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:17.370082 1078428 cri.go:89] found id: ""
	I1210 07:54:17.370109 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.370118 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:17.370128 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:17.370139 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:17.427357 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:17.427407 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:17.443363 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:17.443393 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:17.509516 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:17.501130    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.501831    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.503489    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.503992    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.505575    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:17.501130    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.501831    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.503489    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.503992    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.505575    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:17.509538 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:17.509551 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:17.535043 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:17.535078 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:20.071194 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:20.083928 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:20.084059 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:20.119958 1078428 cri.go:89] found id: ""
	I1210 07:54:20.119987 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.119996 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:20.120002 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:20.120062 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:20.144861 1078428 cri.go:89] found id: ""
	I1210 07:54:20.144883 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.144891 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:20.144897 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:20.144957 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:20.180042 1078428 cri.go:89] found id: ""
	I1210 07:54:20.180069 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.180078 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:20.180085 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:20.180151 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:20.208390 1078428 cri.go:89] found id: ""
	I1210 07:54:20.208423 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.208432 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:20.208439 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:20.208511 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:20.234337 1078428 cri.go:89] found id: ""
	I1210 07:54:20.234358 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.234367 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:20.234373 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:20.234441 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:20.263116 1078428 cri.go:89] found id: ""
	I1210 07:54:20.263138 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.263146 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:20.263153 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:20.263213 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:20.287115 1078428 cri.go:89] found id: ""
	I1210 07:54:20.287188 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.287203 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:20.287210 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:20.287281 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:20.312391 1078428 cri.go:89] found id: ""
	I1210 07:54:20.312415 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.312423 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:20.312432 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:20.312443 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:20.369802 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:20.369838 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:20.387018 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:20.387099 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:20.458731 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:20.450398    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.451165    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.452844    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.453407    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.454975    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:20.450398    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.451165    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.452844    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.453407    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.454975    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:20.458801 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:20.458828 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:20.483627 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:20.483662 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
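
The block above is one full pass of minikube's health/log-gathering loop: it probes each expected control-plane container by name, finds none, and falls back to collecting kubelet, dmesg, describe-nodes, containerd, and container-status output. The probe can be reproduced by hand with the same crictl invocation the log shows; the for-loop wrapper below is illustrative convenience, not minikube's code:

    # Probe each expected component the way the log above does; the crictl
    # flags are verbatim from the log, the loop is just a convenience.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done
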
	I1210 07:54:23.014658 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:23.025123 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:23.025235 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:23.060798 1078428 cri.go:89] found id: ""
	I1210 07:54:23.060872 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.060909 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:23.060934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:23.061025 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:23.092890 1078428 cri.go:89] found id: ""
	I1210 07:54:23.092965 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.092987 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:23.093018 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:23.093129 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:23.122215 1078428 cri.go:89] found id: ""
	I1210 07:54:23.122290 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.122314 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:23.122335 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:23.122418 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:23.147080 1078428 cri.go:89] found id: ""
	I1210 07:54:23.147108 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.147117 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:23.147123 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:23.147213 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:23.171020 1078428 cri.go:89] found id: ""
	I1210 07:54:23.171043 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.171052 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:23.171064 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:23.171120 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:23.195821 1078428 cri.go:89] found id: ""
	I1210 07:54:23.195889 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.195914 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:23.195929 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:23.196016 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:23.219821 1078428 cri.go:89] found id: ""
	I1210 07:54:23.219901 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.219926 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:23.219941 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:23.220025 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:23.248052 1078428 cri.go:89] found id: ""
	I1210 07:54:23.248079 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.248088 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:23.248098 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:23.248109 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:23.305179 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:23.305215 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:23.321081 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:23.321111 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:23.391528 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:23.376042    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.376445    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.378118    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.385464    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.387083    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:23.376042    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.376445    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.378118    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.385464    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.387083    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:23.391553 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:23.391565 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:23.416476 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:23.416509 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
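
Note the cadence: a new pass starts roughly every three seconds (07:54:20, :23, :26, ...), each one opening with the same process check. A bash sketch of that wait loop, with an assumed four-minute budget (the actual timeout is not visible in this excerpt):

    # Illustrative wait loop; the pgrep pattern is verbatim from the log,
    # the 3 s sleep matches the observed cadence, the 240 s budget is assumed.
    end=$((SECONDS + 240))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$end" ] && { echo 'timed out waiting for kube-apiserver' >&2; break; }
      sleep 3
    done
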
	I1210 07:54:25.951859 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:25.962115 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:25.962185 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:25.986216 1078428 cri.go:89] found id: ""
	I1210 07:54:25.986286 1078428 logs.go:282] 0 containers: []
	W1210 07:54:25.986310 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:25.986334 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:25.986426 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:26.011668 1078428 cri.go:89] found id: ""
	I1210 07:54:26.011696 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.011705 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:26.011712 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:26.011773 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:26.037538 1078428 cri.go:89] found id: ""
	I1210 07:54:26.037560 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.037569 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:26.037575 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:26.037634 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:26.066974 1078428 cri.go:89] found id: ""
	I1210 07:54:26.066996 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.067006 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:26.067013 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:26.067071 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:26.100870 1078428 cri.go:89] found id: ""
	I1210 07:54:26.100892 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.100901 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:26.100907 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:26.100966 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:26.130861 1078428 cri.go:89] found id: ""
	I1210 07:54:26.130883 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.130891 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:26.130897 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:26.130957 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:26.156407 1078428 cri.go:89] found id: ""
	I1210 07:54:26.156429 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.156438 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:26.156444 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:26.156502 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:26.182081 1078428 cri.go:89] found id: ""
	I1210 07:54:26.182102 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.182110 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:26.182119 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:26.182133 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:26.239878 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:26.239917 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:26.259189 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:26.259219 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:26.328449 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:26.321353    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.321729    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.322876    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.323201    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.324632    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:26.321353    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.321729    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.322876    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.323201    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.324632    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:26.328475 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:26.328490 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:26.353246 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:26.353278 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:28.882607 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:28.893420 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:28.893495 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:28.917577 1078428 cri.go:89] found id: ""
	I1210 07:54:28.917603 1078428 logs.go:282] 0 containers: []
	W1210 07:54:28.917611 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:28.917617 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:28.917677 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:28.949094 1078428 cri.go:89] found id: ""
	I1210 07:54:28.949123 1078428 logs.go:282] 0 containers: []
	W1210 07:54:28.949132 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:28.949138 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:28.949202 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:28.976683 1078428 cri.go:89] found id: ""
	I1210 07:54:28.976708 1078428 logs.go:282] 0 containers: []
	W1210 07:54:28.976716 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:28.976722 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:28.976783 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:29.001326 1078428 cri.go:89] found id: ""
	I1210 07:54:29.001395 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.001420 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:29.001440 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:29.001526 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:29.026870 1078428 cri.go:89] found id: ""
	I1210 07:54:29.026894 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.026903 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:29.026909 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:29.026992 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:29.059072 1078428 cri.go:89] found id: ""
	I1210 07:54:29.059106 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.059115 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:29.059122 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:29.059190 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:29.089329 1078428 cri.go:89] found id: ""
	I1210 07:54:29.089363 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.089372 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:29.089379 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:29.089446 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:29.116648 1078428 cri.go:89] found id: ""
	I1210 07:54:29.116671 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.116680 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:29.116689 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:29.116701 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:29.141429 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:29.141465 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:29.168073 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:29.168102 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:29.223128 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:29.223165 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:29.239118 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:29.239149 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:29.304306 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:29.295477    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.296316    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.297933    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.298227    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.300445    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:29.295477    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.296316    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.297933    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.298227    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.300445    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
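
Every describe-nodes attempt fails identically: with no kube-apiserver container running, nothing is listening on port 8443 inside the node, so kubectl's discovery requests to https://localhost:8443 are refused before any API call is made. An illustrative check (not part of the test run) that confirms the missing listener and reproduces the failure:

    # Nothing should be listening on the apiserver port:
    sudo ss -ltn 'sport = :8443'
    # Re-running the exact command from the log yields the same refusal:
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
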
	I1210 07:54:31.805827 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:31.819227 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:31.819305 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:31.852872 1078428 cri.go:89] found id: ""
	I1210 07:54:31.852901 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.852910 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:31.852916 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:31.852973 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:31.881145 1078428 cri.go:89] found id: ""
	I1210 07:54:31.881173 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.881182 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:31.881188 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:31.881249 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:31.907195 1078428 cri.go:89] found id: ""
	I1210 07:54:31.907218 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.907227 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:31.907233 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:31.907292 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:31.931775 1078428 cri.go:89] found id: ""
	I1210 07:54:31.931799 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.931808 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:31.931814 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:31.931876 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:31.957735 1078428 cri.go:89] found id: ""
	I1210 07:54:31.957764 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.957772 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:31.957779 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:31.957837 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:31.982202 1078428 cri.go:89] found id: ""
	I1210 07:54:31.982285 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.982308 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:31.982334 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:31.982441 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:32.011091 1078428 cri.go:89] found id: ""
	I1210 07:54:32.011119 1078428 logs.go:282] 0 containers: []
	W1210 07:54:32.011129 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:32.011138 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:32.011205 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:32.039293 1078428 cri.go:89] found id: ""
	I1210 07:54:32.039371 1078428 logs.go:282] 0 containers: []
	W1210 07:54:32.039388 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:32.039399 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:32.039410 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:32.067441 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:32.067482 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:32.105238 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:32.105273 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:32.164873 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:32.164913 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:32.181394 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:32.181477 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:32.250195 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:32.241937    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.242446    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.244284    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.244640    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.246223    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:32.241937    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.242446    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.244284    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.244640    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.246223    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:34.751129 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:34.761490 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:34.761559 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:34.785680 1078428 cri.go:89] found id: ""
	I1210 07:54:34.785702 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.785711 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:34.785716 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:34.785775 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:34.820785 1078428 cri.go:89] found id: ""
	I1210 07:54:34.820809 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.820817 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:34.820823 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:34.820892 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:34.852508 1078428 cri.go:89] found id: ""
	I1210 07:54:34.852531 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.852539 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:34.852545 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:34.852604 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:34.879064 1078428 cri.go:89] found id: ""
	I1210 07:54:34.879095 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.879104 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:34.879111 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:34.879179 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:34.908815 1078428 cri.go:89] found id: ""
	I1210 07:54:34.908849 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.908858 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:34.908864 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:34.908933 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:34.939793 1078428 cri.go:89] found id: ""
	I1210 07:54:34.939820 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.939831 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:34.939838 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:34.939902 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:34.966660 1078428 cri.go:89] found id: ""
	I1210 07:54:34.966730 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.966754 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:34.966775 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:34.966877 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:34.997175 1078428 cri.go:89] found id: ""
	I1210 07:54:34.997202 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.997211 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:34.997221 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:34.997233 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:35.054362 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:35.054504 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:35.071310 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:35.071339 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:35.154263 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:35.146083    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.146679    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.148200    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.148710    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.150271    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:35.146083    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.146679    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.148200    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.148710    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.150271    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:35.154285 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:35.154298 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:35.184377 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:35.184427 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
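
The "container status" step is deliberately defensive: it resolves crictl's full path when available, falls back to the bare name, and finally falls back to docker if crictl itself fails. The one-liner from the log, expanded for readability (behaviour unchanged):

    # Equivalent to: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    CRICTL=$(which crictl || echo crictl)   # resolved path if present, else bare name
    sudo "$CRICTL" ps -a || sudo docker ps -a   # docker is the last resort
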
	I1210 07:54:37.716479 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:37.727384 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:37.727475 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:37.758151 1078428 cri.go:89] found id: ""
	I1210 07:54:37.758175 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.758183 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:37.758189 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:37.758249 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:37.783547 1078428 cri.go:89] found id: ""
	I1210 07:54:37.783572 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.783580 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:37.783586 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:37.783652 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:37.824269 1078428 cri.go:89] found id: ""
	I1210 07:54:37.824302 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.824320 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:37.824326 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:37.824392 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:37.859292 1078428 cri.go:89] found id: ""
	I1210 07:54:37.859315 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.859324 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:37.859332 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:37.859391 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:37.887370 1078428 cri.go:89] found id: ""
	I1210 07:54:37.887395 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.887404 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:37.887411 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:37.887471 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:37.912568 1078428 cri.go:89] found id: ""
	I1210 07:54:37.912590 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.912599 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:37.912605 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:37.912667 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:37.942226 1078428 cri.go:89] found id: ""
	I1210 07:54:37.942294 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.942321 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:37.942341 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:37.942416 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:37.967116 1078428 cri.go:89] found id: ""
	I1210 07:54:37.967186 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.967211 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:37.967234 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:37.967261 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:38.026081 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:38.026123 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:38.044051 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:38.044086 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:38.137383 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:38.129031    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.129785    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.131296    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.131679    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.133181    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:38.129031    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.129785    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.131296    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.131679    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.133181    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:38.137408 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:38.137420 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:38.163137 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:38.163174 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:40.692712 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:40.705786 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:40.705862 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:40.730857 1078428 cri.go:89] found id: ""
	I1210 07:54:40.730881 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.730890 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:40.730896 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:40.730956 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:40.759374 1078428 cri.go:89] found id: ""
	I1210 07:54:40.759401 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.759410 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:40.759417 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:40.759481 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:40.784874 1078428 cri.go:89] found id: ""
	I1210 07:54:40.784898 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.784906 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:40.784912 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:40.784972 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:40.829615 1078428 cri.go:89] found id: ""
	I1210 07:54:40.829638 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.829648 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:40.829655 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:40.829714 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:40.855514 1078428 cri.go:89] found id: ""
	I1210 07:54:40.855537 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.855547 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:40.855553 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:40.855622 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:40.880645 1078428 cri.go:89] found id: ""
	I1210 07:54:40.880674 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.880683 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:40.880699 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:40.880762 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:40.908526 1078428 cri.go:89] found id: ""
	I1210 07:54:40.908553 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.908562 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:40.908568 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:40.908627 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:40.933389 1078428 cri.go:89] found id: ""
	I1210 07:54:40.933417 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.933427 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:40.933466 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:40.933485 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:40.989429 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:40.989508 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:41.005657 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:41.005748 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:41.093001 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:41.084101    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.084887    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.086620    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.087167    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.088880    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:41.084101    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.084887    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.086620    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.087167    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.088880    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:41.093075 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:41.093107 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:41.120941 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:41.121022 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
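
Beyond the container probes, each pass bundles the host-side logs that matter when the control plane is down. The commands below are taken verbatim from the passes above and can be run by hand inside the node (for example via minikube ssh against the same profile) when triaging this kind of failure:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
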
	I1210 07:54:43.650332 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:43.660886 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:43.660957 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:43.685546 1078428 cri.go:89] found id: ""
	I1210 07:54:43.685572 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.685582 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:43.685590 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:43.685652 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:43.710551 1078428 cri.go:89] found id: ""
	I1210 07:54:43.710575 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.710584 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:43.710590 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:43.710651 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:43.735321 1078428 cri.go:89] found id: ""
	I1210 07:54:43.735347 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.735357 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:43.735363 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:43.735422 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:43.760265 1078428 cri.go:89] found id: ""
	I1210 07:54:43.760290 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.760299 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:43.760305 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:43.760371 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:43.785386 1078428 cri.go:89] found id: ""
	I1210 07:54:43.785412 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.785421 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:43.785427 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:43.785491 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:43.812278 1078428 cri.go:89] found id: ""
	I1210 07:54:43.812305 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.812323 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:43.812331 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:43.812390 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:43.844260 1078428 cri.go:89] found id: ""
	I1210 07:54:43.844288 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.844297 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:43.844303 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:43.844374 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:43.878456 1078428 cri.go:89] found id: ""
	I1210 07:54:43.878503 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.878512 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:43.878522 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:43.878533 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:43.934467 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:43.934503 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:43.951761 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:43.951790 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:44.019672 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:44.010215    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.011300    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.013256    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.013896    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.015584    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:44.019739 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:44.019764 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:44.045374 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:44.045448 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:46.583553 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:46.594544 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:46.594614 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:46.620989 1078428 cri.go:89] found id: ""
	I1210 07:54:46.621016 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.621026 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:46.621032 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:46.621092 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:46.646885 1078428 cri.go:89] found id: ""
	I1210 07:54:46.646912 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.646921 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:46.646927 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:46.646993 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:46.671522 1078428 cri.go:89] found id: ""
	I1210 07:54:46.671545 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.671555 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:46.671561 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:46.671627 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:46.697035 1078428 cri.go:89] found id: ""
	I1210 07:54:46.697057 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.697066 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:46.697076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:46.697135 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:46.721985 1078428 cri.go:89] found id: ""
	I1210 07:54:46.722008 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.722016 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:46.722023 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:46.722081 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:46.750862 1078428 cri.go:89] found id: ""
	I1210 07:54:46.750885 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.750894 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:46.750900 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:46.750957 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:46.775321 1078428 cri.go:89] found id: ""
	I1210 07:54:46.775347 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.775357 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:46.775363 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:46.775422 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:46.804576 1078428 cri.go:89] found id: ""
	I1210 07:54:46.804603 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.804612 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:46.804624 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:46.804635 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:46.869024 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:46.869059 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:46.887039 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:46.887068 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:46.955257 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:46.946979    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.947599    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.949092    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.949593    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.951087    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:46.955281 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:46.955294 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:46.981722 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:46.981766 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:49.512895 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:49.523585 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:49.523660 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:49.553762 1078428 cri.go:89] found id: ""
	I1210 07:54:49.553799 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.553809 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:49.553815 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:49.553883 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:49.584365 1078428 cri.go:89] found id: ""
	I1210 07:54:49.584397 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.584406 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:49.584412 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:49.584473 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:49.609054 1078428 cri.go:89] found id: ""
	I1210 07:54:49.609078 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.609088 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:49.609094 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:49.609153 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:49.633506 1078428 cri.go:89] found id: ""
	I1210 07:54:49.633585 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.633612 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:49.633632 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:49.633727 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:49.660681 1078428 cri.go:89] found id: ""
	I1210 07:54:49.660705 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.660713 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:49.660719 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:49.660779 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:49.684429 1078428 cri.go:89] found id: ""
	I1210 07:54:49.684456 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.684465 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:49.684472 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:49.684559 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:49.708792 1078428 cri.go:89] found id: ""
	I1210 07:54:49.708825 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.708834 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:49.708841 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:49.708907 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:49.733028 1078428 cri.go:89] found id: ""
	I1210 07:54:49.733061 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.733070 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:49.733080 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:49.733093 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:49.788419 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:49.788454 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:49.806199 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:49.806229 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:49.890193 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:49.880173    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.881024    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.882835    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.883725    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.885590    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:49.890216 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:49.890229 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:49.916164 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:49.916201 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:52.445192 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:52.455938 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:52.456011 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:52.483578 1078428 cri.go:89] found id: ""
	I1210 07:54:52.483607 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.483615 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:52.483622 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:52.483681 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:52.508996 1078428 cri.go:89] found id: ""
	I1210 07:54:52.509019 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.509028 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:52.509035 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:52.509100 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:52.534163 1078428 cri.go:89] found id: ""
	I1210 07:54:52.534189 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.534197 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:52.534204 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:52.534262 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:52.559446 1078428 cri.go:89] found id: ""
	I1210 07:54:52.559468 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.559476 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:52.559482 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:52.559538 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:52.585685 1078428 cri.go:89] found id: ""
	I1210 07:54:52.585705 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.585714 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:52.585720 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:52.585781 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:52.610362 1078428 cri.go:89] found id: ""
	I1210 07:54:52.610387 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.610396 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:52.610429 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:52.610553 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:52.639114 1078428 cri.go:89] found id: ""
	I1210 07:54:52.639140 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.639149 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:52.639155 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:52.639239 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:52.669083 1078428 cri.go:89] found id: ""
	I1210 07:54:52.669111 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.669120 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:52.669129 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:52.669141 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:52.684926 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:52.684953 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:52.749001 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:52.740050    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.740864    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.742595    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.743091    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.744746    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:52.749025 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:52.749037 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:52.773227 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:52.773261 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:52.804197 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:52.804276 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:55.368759 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:55.379351 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:55.379439 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:55.403912 1078428 cri.go:89] found id: ""
	I1210 07:54:55.403937 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.403946 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:55.403953 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:55.404021 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:55.432879 1078428 cri.go:89] found id: ""
	I1210 07:54:55.432902 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.432912 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:55.432918 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:55.432981 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:55.457499 1078428 cri.go:89] found id: ""
	I1210 07:54:55.457528 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.457537 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:55.457546 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:55.457605 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:55.482796 1078428 cri.go:89] found id: ""
	I1210 07:54:55.482824 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.482833 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:55.482840 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:55.482900 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:55.508135 1078428 cri.go:89] found id: ""
	I1210 07:54:55.508158 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.508167 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:55.508173 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:55.508239 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:55.532757 1078428 cri.go:89] found id: ""
	I1210 07:54:55.532828 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.532849 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:55.532856 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:55.532923 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:55.558383 1078428 cri.go:89] found id: ""
	I1210 07:54:55.558408 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.558431 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:55.558437 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:55.558540 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:55.584737 1078428 cri.go:89] found id: ""
	I1210 07:54:55.584768 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.584780 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:55.584790 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:55.584802 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:55.611899 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:55.611929 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:55.667940 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:55.667974 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:55.683872 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:55.683902 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:55.753488 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:55.745404    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.746023    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.747737    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.748225    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.749705    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:55.753511 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:55.753523 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:58.279433 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:58.290275 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:58.290358 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:58.315732 1078428 cri.go:89] found id: ""
	I1210 07:54:58.315760 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.315769 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:58.315775 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:58.315840 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:58.354970 1078428 cri.go:89] found id: ""
	I1210 07:54:58.354993 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.355002 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:58.355009 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:58.355080 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:58.387261 1078428 cri.go:89] found id: ""
	I1210 07:54:58.387290 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.387300 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:58.387307 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:58.387366 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:58.415659 1078428 cri.go:89] found id: ""
	I1210 07:54:58.415683 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.415691 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:58.415698 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:58.415762 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:58.440257 1078428 cri.go:89] found id: ""
	I1210 07:54:58.440283 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.440292 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:58.440298 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:58.440380 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:58.465572 1078428 cri.go:89] found id: ""
	I1210 07:54:58.465598 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.465607 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:58.465614 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:58.465672 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:58.490288 1078428 cri.go:89] found id: ""
	I1210 07:54:58.490313 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.490321 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:58.490327 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:58.490384 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:58.516549 1078428 cri.go:89] found id: ""
	I1210 07:54:58.516572 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.516580 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:58.516590 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:58.516601 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:58.542195 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:58.542234 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:58.570592 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:58.570623 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:58.627983 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:58.628020 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:58.644192 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:58.644218 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:58.708892 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:58.700163    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.700595    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.702324    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.703126    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.704702    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:01.209184 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:01.221080 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:01.221155 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:01.250125 1078428 cri.go:89] found id: ""
	I1210 07:55:01.250154 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.250163 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:01.250178 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:01.250240 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:01.276827 1078428 cri.go:89] found id: ""
	I1210 07:55:01.276854 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.276869 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:01.276876 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:01.276938 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:01.311772 1078428 cri.go:89] found id: ""
	I1210 07:55:01.311808 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.311818 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:01.311824 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:01.311894 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:01.344006 1078428 cri.go:89] found id: ""
	I1210 07:55:01.344042 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.344052 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:01.344059 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:01.344131 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:01.370453 1078428 cri.go:89] found id: ""
	I1210 07:55:01.370508 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.370517 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:01.370524 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:01.370596 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:01.396784 1078428 cri.go:89] found id: ""
	I1210 07:55:01.396811 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.396833 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:01.396840 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:01.396925 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:01.427026 1078428 cri.go:89] found id: ""
	I1210 07:55:01.427053 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.427064 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:01.427076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:01.427145 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:01.453716 1078428 cri.go:89] found id: ""
	I1210 07:55:01.453745 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.453755 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:01.453765 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:01.453787 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:01.483021 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:01.483048 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:01.538363 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:01.538402 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:01.555879 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:01.555912 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:01.624093 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:01.614677    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.615511    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.617288    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.617899    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.619694    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:01.624120 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:01.624136 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:04.151461 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:04.161982 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:04.162052 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:04.187914 1078428 cri.go:89] found id: ""
	I1210 07:55:04.187940 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.187955 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:04.187961 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:04.188020 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:04.212016 1078428 cri.go:89] found id: ""
	I1210 07:55:04.212039 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.212048 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:04.212054 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:04.212113 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:04.237062 1078428 cri.go:89] found id: ""
	I1210 07:55:04.237088 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.237098 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:04.237107 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:04.237166 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:04.262844 1078428 cri.go:89] found id: ""
	I1210 07:55:04.262867 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.262876 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:04.262883 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:04.262943 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:04.288099 1078428 cri.go:89] found id: ""
	I1210 07:55:04.288125 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.288134 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:04.288140 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:04.288198 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:04.315819 1078428 cri.go:89] found id: ""
	I1210 07:55:04.315846 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.315855 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:04.315861 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:04.315923 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:04.349897 1078428 cri.go:89] found id: ""
	I1210 07:55:04.349919 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.349928 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:04.349934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:04.349992 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:04.374228 1078428 cri.go:89] found id: ""
	I1210 07:55:04.374255 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.374264 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:04.374274 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:04.374285 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:04.430541 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:04.430576 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:04.446913 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:04.446947 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:04.519646 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:04.510952    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.511715    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.513430    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.514116    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.515790    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:04.519667 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:04.519679 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:04.545056 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:04.545097 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
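For reference, the repeated cri.go/ssh_runner.go lines above amount to a single probe: run crictl on the node, filter by container name, and treat empty output as zero containers. A minimal Go sketch of that probe follows; the helper name and direct exec call are illustrative, not minikube's internal API, and it assumes a shell with sudo and crictl available.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainerIDs(name string) ([]string, error) {
	// Mirrors: sudo crictl ps -a --quiet --name=<name>
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	// With the control plane down this prints "0 containers: []", matching the log.
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}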
	I1210 07:55:07.074592 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:07.085572 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:07.085640 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:07.111394 1078428 cri.go:89] found id: ""
	I1210 07:55:07.111418 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.111426 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:07.111432 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:07.111497 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:07.135823 1078428 cri.go:89] found id: ""
	I1210 07:55:07.135848 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.135857 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:07.135864 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:07.135923 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:07.164275 1078428 cri.go:89] found id: ""
	I1210 07:55:07.164297 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.164306 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:07.164311 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:07.164385 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:07.193334 1078428 cri.go:89] found id: ""
	I1210 07:55:07.193358 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.193367 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:07.193373 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:07.193429 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:07.217929 1078428 cri.go:89] found id: ""
	I1210 07:55:07.217955 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.217964 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:07.217970 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:07.218032 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:07.243152 1078428 cri.go:89] found id: ""
	I1210 07:55:07.243176 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.243185 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:07.243191 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:07.243251 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:07.270888 1078428 cri.go:89] found id: ""
	I1210 07:55:07.270918 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.270927 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:07.270934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:07.270992 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:07.304504 1078428 cri.go:89] found id: ""
	I1210 07:55:07.304531 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.304540 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:07.304549 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:07.304561 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:07.370744 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:07.370786 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:07.386532 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:07.386606 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:07.450870 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:07.442507    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.443254    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.444829    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.445138    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.446858    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:07.450892 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:07.450906 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:07.476441 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:07.476476 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
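Every "describe nodes" attempt above fails the same way: nothing is listening on localhost:8443, so the TCP dial is refused before kubectl can speak HTTP. The symptom can be reproduced with a plain dial, as in this small Go sketch:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The kubectl errors above boil down to this dial failing with
	// "connect: connection refused" while the apiserver is down.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}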
	I1210 07:55:10.006374 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:10.031408 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:10.031500 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:10.072527 1078428 cri.go:89] found id: ""
	I1210 07:55:10.072558 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.072568 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:10.072575 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:10.072637 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:10.107560 1078428 cri.go:89] found id: ""
	I1210 07:55:10.107605 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.107615 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:10.107621 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:10.107694 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:10.138416 1078428 cri.go:89] found id: ""
	I1210 07:55:10.138441 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.138450 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:10.138456 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:10.138547 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:10.163271 1078428 cri.go:89] found id: ""
	I1210 07:55:10.163294 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.163303 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:10.163309 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:10.163372 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:10.193549 1078428 cri.go:89] found id: ""
	I1210 07:55:10.193625 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.193637 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:10.193664 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:10.193766 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:10.225083 1078428 cri.go:89] found id: ""
	I1210 07:55:10.225169 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.225182 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:10.225212 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:10.225307 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:10.251042 1078428 cri.go:89] found id: ""
	I1210 07:55:10.251067 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.251082 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:10.251089 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:10.251175 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:10.275656 1078428 cri.go:89] found id: ""
	I1210 07:55:10.275681 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.275690 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:10.275699 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:10.275711 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:10.335591 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:10.335628 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:10.352546 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:10.352577 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:10.421057 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:10.412822    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.413267    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.414660    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.415223    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.416854    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:10.421081 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:10.421094 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:10.446445 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:10.446578 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
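The timestamps show the same cycle repeating roughly every 2.5 seconds: probe for a kube-apiserver process, and gather diagnostics while it is absent. A hedged Go sketch of that polling loop (gatherDiagnostics is a hypothetical stand-in for the journalctl/dmesg/describe-nodes/crictl steps above):

package main

import (
	"os/exec"
	"time"
)

// apiserverRunning mirrors: sudo pgrep -xnf kube-apiserver.*minikube.*
// pgrep exits non-zero when no process matches, so Run() returning nil
// means a matching process exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// gatherDiagnostics is a hypothetical stand-in for the journalctl, dmesg,
// describe-nodes, and crictl steps the log performs on each pass.
func gatherDiagnostics() {}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) && !apiserverRunning() {
		gatherDiagnostics()
		time.Sleep(2500 * time.Millisecond) // matches the ~2.5s cadence in the timestamps
	}
}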
	I1210 07:55:12.978285 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:12.988877 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:12.988951 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:13.014715 1078428 cri.go:89] found id: ""
	I1210 07:55:13.014738 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.014746 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:13.014753 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:13.014812 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:13.039187 1078428 cri.go:89] found id: ""
	I1210 07:55:13.039217 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.039226 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:13.039231 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:13.039293 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:13.079663 1078428 cri.go:89] found id: ""
	I1210 07:55:13.079687 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.079696 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:13.079702 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:13.079762 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:13.116097 1078428 cri.go:89] found id: ""
	I1210 07:55:13.116118 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.116127 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:13.116133 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:13.116190 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:13.141856 1078428 cri.go:89] found id: ""
	I1210 07:55:13.141921 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.141946 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:13.141973 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:13.142049 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:13.166245 1078428 cri.go:89] found id: ""
	I1210 07:55:13.166318 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.166341 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:13.166361 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:13.166452 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:13.190766 1078428 cri.go:89] found id: ""
	I1210 07:55:13.190790 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.190799 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:13.190805 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:13.190864 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:13.218179 1078428 cri.go:89] found id: ""
	I1210 07:55:13.218217 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.218227 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:13.218253 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:13.218270 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:13.234044 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:13.234082 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:13.303134 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:13.286450    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.287225    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.289050    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.289622    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.291338    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:13.303158 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:13.303170 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:13.330980 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:13.331017 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:13.358836 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:13.358865 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:15.922613 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:15.933295 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:15.933370 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:15.958341 1078428 cri.go:89] found id: ""
	I1210 07:55:15.958364 1078428 logs.go:282] 0 containers: []
	W1210 07:55:15.958373 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:15.958378 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:15.958434 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:15.983285 1078428 cri.go:89] found id: ""
	I1210 07:55:15.983309 1078428 logs.go:282] 0 containers: []
	W1210 07:55:15.983324 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:15.983330 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:15.983387 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:16.008789 1078428 cri.go:89] found id: ""
	I1210 07:55:16.008816 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.008825 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:16.008831 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:16.008926 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:16.035859 1078428 cri.go:89] found id: ""
	I1210 07:55:16.035931 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.035946 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:16.035955 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:16.036022 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:16.068655 1078428 cri.go:89] found id: ""
	I1210 07:55:16.068688 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.068697 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:16.068704 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:16.068776 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:16.106754 1078428 cri.go:89] found id: ""
	I1210 07:55:16.106780 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.106790 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:16.106796 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:16.106862 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:16.133097 1078428 cri.go:89] found id: ""
	I1210 07:55:16.133124 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.133133 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:16.133139 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:16.133207 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:16.157892 1078428 cri.go:89] found id: ""
	I1210 07:55:16.157938 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.157947 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:16.157957 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:16.157970 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:16.212808 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:16.212848 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:16.228781 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:16.228813 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:16.291789 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:16.283368    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.283909    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.285648    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.286193    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.287783    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:16.291811 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:16.291823 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:16.319342 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:16.319380 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
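The "Gathering logs for kubelet ..." and "for containerd ..." steps are plain journal tails: the last 400 entries of a systemd unit. A minimal Go sketch of that step (the unitLogs helper is illustrative, not minikube's API):

package main

import (
	"fmt"
	"os/exec"
)

// unitLogs mirrors: /bin/bash -c "sudo journalctl -u <unit> -n 400"
func unitLogs(unit string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("sudo journalctl -u %s -n 400", unit)).CombinedOutput()
	return string(out), err
}

func main() {
	for _, unit := range []string{"kubelet", "containerd"} {
		logs, err := unitLogs(unit)
		if err != nil {
			fmt.Println(unit, "logs failed:", err)
			continue
		}
		fmt.Print(logs)
	}
}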
	I1210 07:55:18.855190 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:18.865732 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:18.865807 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:18.889830 1078428 cri.go:89] found id: ""
	I1210 07:55:18.889855 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.889864 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:18.889871 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:18.889936 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:18.914345 1078428 cri.go:89] found id: ""
	I1210 07:55:18.914370 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.914379 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:18.914385 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:18.914444 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:18.939221 1078428 cri.go:89] found id: ""
	I1210 07:55:18.939243 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.939253 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:18.939258 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:18.939316 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:18.967766 1078428 cri.go:89] found id: ""
	I1210 07:55:18.967788 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.967796 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:18.967803 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:18.967867 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:18.996962 1078428 cri.go:89] found id: ""
	I1210 07:55:18.996984 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.996992 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:18.996999 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:18.997055 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:19.023004 1078428 cri.go:89] found id: ""
	I1210 07:55:19.023031 1078428 logs.go:282] 0 containers: []
	W1210 07:55:19.023043 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:19.023052 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:19.023115 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:19.057510 1078428 cri.go:89] found id: ""
	I1210 07:55:19.057540 1078428 logs.go:282] 0 containers: []
	W1210 07:55:19.057549 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:19.057555 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:19.057611 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:19.092862 1078428 cri.go:89] found id: ""
	I1210 07:55:19.092891 1078428 logs.go:282] 0 containers: []
	W1210 07:55:19.092900 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:19.092910 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:19.092921 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:19.150597 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:19.150632 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:19.166174 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:19.166252 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:19.232235 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:19.224144    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.224636    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.226223    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.226815    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.228275    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:19.232259 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:19.232272 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:19.256392 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:19.256424 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
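The dmesg step keeps only warning-and-worse kernel messages and tails the last 400 lines. A small Go sketch mirroring the exact flags from the Run line above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the Run line above: only warning-and-worse kernel messages,
	// trimmed to the last 400 lines.
	out, err := exec.Command("/bin/bash", "-c",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").CombinedOutput()
	if err != nil {
		fmt.Println("dmesg failed:", err)
		return
	}
	fmt.Print(string(out))
}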
	I1210 07:55:21.783358 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:21.793821 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:21.793896 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:21.818542 1078428 cri.go:89] found id: ""
	I1210 07:55:21.818564 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.818573 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:21.818580 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:21.818639 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:21.842392 1078428 cri.go:89] found id: ""
	I1210 07:55:21.842414 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.842423 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:21.842429 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:21.842509 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:21.869909 1078428 cri.go:89] found id: ""
	I1210 07:55:21.869931 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.869940 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:21.869947 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:21.870009 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:21.896175 1078428 cri.go:89] found id: ""
	I1210 07:55:21.896197 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.896206 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:21.896212 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:21.896272 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:21.924596 1078428 cri.go:89] found id: ""
	I1210 07:55:21.924672 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.924684 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:21.924691 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:21.924781 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:21.952789 1078428 cri.go:89] found id: ""
	I1210 07:55:21.952811 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.952820 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:21.952826 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:21.952885 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:21.978579 1078428 cri.go:89] found id: ""
	I1210 07:55:21.978603 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.978611 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:21.978617 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:21.978678 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:22.002801 1078428 cri.go:89] found id: ""
	I1210 07:55:22.002829 1078428 logs.go:282] 0 containers: []
	W1210 07:55:22.002838 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:22.002848 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:22.002866 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:22.021034 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:22.021067 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:22.101183 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:22.089820    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.090755    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.092439    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.092768    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.094252    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:22.101208 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:22.101223 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:22.133557 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:22.133593 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:22.160692 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:22.160719 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:24.716616 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:24.727463 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:24.727545 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:24.752976 1078428 cri.go:89] found id: ""
	I1210 07:55:24.753005 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.753014 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:24.753021 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:24.753081 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:24.780812 1078428 cri.go:89] found id: ""
	I1210 07:55:24.780841 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.780850 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:24.780856 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:24.780913 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:24.806877 1078428 cri.go:89] found id: ""
	I1210 07:55:24.806900 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.806909 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:24.806915 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:24.806979 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:24.836752 1078428 cri.go:89] found id: ""
	I1210 07:55:24.836785 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.836795 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:24.836809 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:24.836876 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:24.863110 1078428 cri.go:89] found id: ""
	I1210 07:55:24.863134 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.863143 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:24.863153 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:24.863219 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:24.888190 1078428 cri.go:89] found id: ""
	I1210 07:55:24.888214 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.888223 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:24.888230 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:24.888289 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:24.912349 1078428 cri.go:89] found id: ""
	I1210 07:55:24.912383 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.912394 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:24.912400 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:24.912462 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:24.937752 1078428 cri.go:89] found id: ""
	I1210 07:55:24.937781 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.937790 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:24.937799 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:24.937811 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:24.992892 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:24.992928 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:25.010173 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:25.010241 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:25.099629 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:25.089564    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.090718    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.091639    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.093411    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.093968    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:25.099713 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:25.099746 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:25.131383 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:25.131423 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
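The "container status" step uses a shell fallback: run crictl if it is on PATH, otherwise try the bare name, and fall back to docker if that fails too. A short Go sketch of that one-liner (illustrative only):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	// If crictl is on PATH its full path is used; otherwise the bare name is
	// tried, and if that fails too, docker is the fallback.
	out, err := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
	if err != nil {
		fmt.Println("both crictl and docker failed:", err)
		return
	}
	fmt.Print(string(out))
}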
	I1210 07:55:27.663351 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:27.674757 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:27.674843 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:27.704367 1078428 cri.go:89] found id: ""
	I1210 07:55:27.704400 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.704409 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:27.704420 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:27.704484 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:27.731740 1078428 cri.go:89] found id: ""
	I1210 07:55:27.731773 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.731783 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:27.731790 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:27.731852 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:27.761848 1078428 cri.go:89] found id: ""
	I1210 07:55:27.761871 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.761880 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:27.761886 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:27.761952 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:27.789498 1078428 cri.go:89] found id: ""
	I1210 07:55:27.789527 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.789537 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:27.789543 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:27.789603 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:27.815293 1078428 cri.go:89] found id: ""
	I1210 07:55:27.815320 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.815335 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:27.815342 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:27.815401 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:27.840211 1078428 cri.go:89] found id: ""
	I1210 07:55:27.840238 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.840249 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:27.840256 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:27.840320 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:27.866289 1078428 cri.go:89] found id: ""
	I1210 07:55:27.866313 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.866323 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:27.866329 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:27.866388 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:27.892533 1078428 cri.go:89] found id: ""
	I1210 07:55:27.892560 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.892569 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:27.892578 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:27.892590 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:27.952019 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:27.952063 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:27.969597 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:27.969631 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:28.035775 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:28.025972    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.026696    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.028884    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.029384    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.031424    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:28.035802 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:28.035816 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:28.064304 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:28.064344 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:30.599553 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:30.609953 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:30.610023 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:30.634355 1078428 cri.go:89] found id: ""
	I1210 07:55:30.634384 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.634393 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:30.634400 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:30.634460 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:30.658396 1078428 cri.go:89] found id: ""
	I1210 07:55:30.658435 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.658444 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:30.658450 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:30.658540 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:30.683976 1078428 cri.go:89] found id: ""
	I1210 07:55:30.684014 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.684023 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:30.684030 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:30.684099 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:30.708278 1078428 cri.go:89] found id: ""
	I1210 07:55:30.708302 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.708311 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:30.708317 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:30.708376 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:30.733222 1078428 cri.go:89] found id: ""
	I1210 07:55:30.733253 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.733262 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:30.733269 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:30.733368 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:30.758588 1078428 cri.go:89] found id: ""
	I1210 07:55:30.758614 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.758623 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:30.758630 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:30.758700 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:30.783735 1078428 cri.go:89] found id: ""
	I1210 07:55:30.783802 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.783826 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:30.783841 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:30.783910 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:30.807833 1078428 cri.go:89] found id: ""
	I1210 07:55:30.807859 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.807867 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:30.807876 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:30.807888 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:30.872941 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:30.864693    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.865280    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.867101    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.867488    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.869102    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:30.872961 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:30.872975 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:30.899140 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:30.899181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:30.926302 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:30.926333 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:30.982513 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:30.982550 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:33.499017 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:33.509596 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:33.509669 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:33.540057 1078428 cri.go:89] found id: ""
	I1210 07:55:33.540082 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.540090 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:33.540097 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:33.540160 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:33.570955 1078428 cri.go:89] found id: ""
	I1210 07:55:33.570982 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.570991 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:33.570997 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:33.571056 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:33.605930 1078428 cri.go:89] found id: ""
	I1210 07:55:33.605958 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.605968 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:33.605974 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:33.606036 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:33.634909 1078428 cri.go:89] found id: ""
	I1210 07:55:33.634932 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.634941 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:33.634947 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:33.635008 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:33.659844 1078428 cri.go:89] found id: ""
	I1210 07:55:33.659912 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.659927 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:33.659935 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:33.659999 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:33.684878 1078428 cri.go:89] found id: ""
	I1210 07:55:33.684902 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.684911 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:33.684918 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:33.684983 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:33.709473 1078428 cri.go:89] found id: ""
	I1210 07:55:33.709496 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.709505 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:33.709517 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:33.709580 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:33.736059 1078428 cri.go:89] found id: ""
	I1210 07:55:33.736086 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.736095 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:33.736105 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:33.736117 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:33.795512 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:33.795546 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:33.811254 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:33.811282 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:33.878126 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:33.869884    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.870553    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.872067    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.872608    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.874097    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:33.878148 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:33.878163 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:33.904005 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:33.904041 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:36.431681 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:36.442446 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:36.442546 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:36.466520 1078428 cri.go:89] found id: ""
	I1210 07:55:36.466544 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.466553 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:36.466559 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:36.466616 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:36.497280 1078428 cri.go:89] found id: ""
	I1210 07:55:36.497307 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.497316 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:36.497322 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:36.497382 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:36.526966 1078428 cri.go:89] found id: ""
	I1210 07:55:36.526988 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.526998 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:36.527003 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:36.527067 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:36.566317 1078428 cri.go:89] found id: ""
	I1210 07:55:36.566342 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.566351 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:36.566357 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:36.566432 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:36.598673 1078428 cri.go:89] found id: ""
	I1210 07:55:36.598699 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.598716 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:36.598722 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:36.598795 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:36.638514 1078428 cri.go:89] found id: ""
	I1210 07:55:36.638537 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.638545 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:36.638551 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:36.638621 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:36.663534 1078428 cri.go:89] found id: ""
	I1210 07:55:36.663603 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.663623 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:36.663630 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:36.663715 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:36.692427 1078428 cri.go:89] found id: ""
	I1210 07:55:36.692451 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.692461 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:36.692471 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:36.692482 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:36.717965 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:36.718003 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:36.749638 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:36.749668 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:36.806519 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:36.806562 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:36.823288 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:36.823315 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:36.888077 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:36.879018    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.879649    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.881436    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.881956    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.883715    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
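	
	Each describe-nodes attempt above fails identically: kubectl inside the node cannot reach the apiserver on localhost:8443 because no kube-apiserver container ever came up. A short check to confirm that from a node shell; the kubectl invocation is verbatim from the log, while the ss listener check is an added assumption (not something the harness runs):
	
	    # the exact probe the harness runs; expected to fail with "connection refused"
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	    # assumed extra check: is anything listening on the apiserver port?
	    sudo ss -tlnp | grep ':8443' || echo "no listener on 8443"
	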
	I1210 07:55:39.389725 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:39.400775 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:39.400867 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:39.426362 1078428 cri.go:89] found id: ""
	I1210 07:55:39.426389 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.426398 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:39.426407 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:39.426555 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:39.455943 1078428 cri.go:89] found id: ""
	I1210 07:55:39.455969 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.455978 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:39.455984 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:39.456043 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:39.484097 1078428 cri.go:89] found id: ""
	I1210 07:55:39.484127 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.484142 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:39.484150 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:39.484209 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:39.510381 1078428 cri.go:89] found id: ""
	I1210 07:55:39.510408 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.510417 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:39.510423 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:39.510508 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:39.534754 1078428 cri.go:89] found id: ""
	I1210 07:55:39.534819 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.534838 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:39.534845 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:39.534903 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:39.577369 1078428 cri.go:89] found id: ""
	I1210 07:55:39.577400 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.577409 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:39.577416 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:39.577519 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:39.607302 1078428 cri.go:89] found id: ""
	I1210 07:55:39.607329 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.607348 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:39.607355 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:39.607429 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:39.637231 1078428 cri.go:89] found id: ""
	I1210 07:55:39.637270 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.637282 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:39.637292 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:39.637305 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:39.694701 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:39.694745 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:39.711729 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:39.711761 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:39.777959 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:39.769450    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.770116    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.771675    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.771995    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.773551    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:39.777980 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:39.777995 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:39.802829 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:39.802869 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:42.336278 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:42.348869 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:42.348958 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:42.376684 1078428 cri.go:89] found id: ""
	I1210 07:55:42.376751 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.376766 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:42.376774 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:42.376834 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:42.401855 1078428 cri.go:89] found id: ""
	I1210 07:55:42.401881 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.401890 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:42.401897 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:42.401956 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:42.429508 1078428 cri.go:89] found id: ""
	I1210 07:55:42.429532 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.429541 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:42.429547 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:42.429605 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:42.453954 1078428 cri.go:89] found id: ""
	I1210 07:55:42.453978 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.453988 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:42.453994 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:42.454052 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:42.480307 1078428 cri.go:89] found id: ""
	I1210 07:55:42.480372 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.480386 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:42.480393 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:42.480465 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:42.505157 1078428 cri.go:89] found id: ""
	I1210 07:55:42.505189 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.505198 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:42.505205 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:42.505272 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:42.530482 1078428 cri.go:89] found id: ""
	I1210 07:55:42.530505 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.530513 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:42.530520 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:42.530580 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:42.563929 1078428 cri.go:89] found id: ""
	I1210 07:55:42.563996 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.564019 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:42.564041 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:42.564081 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:42.627607 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:42.627645 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:42.644032 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:42.644059 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:42.709684 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:42.701113    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.701752    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.703455    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.704045    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.705675    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:42.709704 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:42.709717 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:42.735150 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:42.735190 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:45.263314 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:45.276890 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:45.276965 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:45.320051 1078428 cri.go:89] found id: ""
	I1210 07:55:45.320079 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.320089 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:45.320096 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:45.320155 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:45.357108 1078428 cri.go:89] found id: ""
	I1210 07:55:45.357143 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.357153 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:45.357159 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:45.357235 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:45.386251 1078428 cri.go:89] found id: ""
	I1210 07:55:45.386281 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.386290 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:45.386296 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:45.386355 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:45.411934 1078428 cri.go:89] found id: ""
	I1210 07:55:45.411960 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.411969 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:45.411975 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:45.412034 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:45.438194 1078428 cri.go:89] found id: ""
	I1210 07:55:45.438221 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.438236 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:45.438242 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:45.438299 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:45.462840 1078428 cri.go:89] found id: ""
	I1210 07:55:45.462864 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.462874 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:45.462880 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:45.462938 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:45.487271 1078428 cri.go:89] found id: ""
	I1210 07:55:45.487296 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.487304 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:45.487311 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:45.487368 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:45.512829 1078428 cri.go:89] found id: ""
	I1210 07:55:45.512859 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.512868 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:45.512877 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:45.512888 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:45.592088 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:45.582808    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.583533    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.585213    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.585778    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.587458    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:45.592106 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:45.592119 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:45.625233 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:45.625268 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:45.653443 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:45.653475 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:45.708240 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:45.708280 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:48.225757 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:48.236296 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:48.236369 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:48.261289 1078428 cri.go:89] found id: ""
	I1210 07:55:48.261312 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.261320 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:48.261337 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:48.261400 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:48.286722 1078428 cri.go:89] found id: ""
	I1210 07:55:48.286746 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.286755 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:48.286761 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:48.286819 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:48.322426 1078428 cri.go:89] found id: ""
	I1210 07:55:48.322453 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.322484 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:48.322507 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:48.322588 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:48.351023 1078428 cri.go:89] found id: ""
	I1210 07:55:48.351052 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.351062 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:48.351068 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:48.351126 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:48.378519 1078428 cri.go:89] found id: ""
	I1210 07:55:48.378542 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.378550 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:48.378556 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:48.378616 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:48.403355 1078428 cri.go:89] found id: ""
	I1210 07:55:48.403382 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.403392 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:48.403398 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:48.403478 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:48.427960 1078428 cri.go:89] found id: ""
	I1210 07:55:48.427986 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.427995 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:48.428001 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:48.428059 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:48.451603 1078428 cri.go:89] found id: ""
	I1210 07:55:48.451670 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.451696 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:48.451714 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:48.451727 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:48.506052 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:48.506088 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:48.523423 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:48.523453 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:48.594581 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:48.586197    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.587063    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.588701    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.589035    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.590570    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:48.586197    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.587063    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.588701    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.589035    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.590570    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:48.594606 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:48.594619 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:48.622945 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:48.622982 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
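
For readers tracing the probe cycle above: minikube shells into the node and runs `sudo crictl ps -a --quiet --name=<component>` for each expected control-plane container. With `--quiet`, crictl prints only container IDs, so empty stdout is what produces the `found id: ""` / `0 containers: []` / `No container was found matching` triplets. A minimal local sketch of that check, assuming crictl on PATH and plain os/exec in place of minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasContainer reports whether any container (running or exited) matches the
// given name filter, mirroring the `crictl ps -a --quiet --name=...` probe in
// the log above. Empty stdout means no container was found matching it.
func hasContainer(name string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) != "", nil
}

func main() {
	// Same component list the cycle above walks through, in the same order.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
		ok, err := hasContainer(c)
		if err != nil {
			fmt.Printf("probe %s: %v\n", c, err)
			continue
		}
		fmt.Printf("%-24s found=%v\n", c, ok)
	}
}

In this run every probe comes back empty, which is why each cycle falls through to log gathering.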
	I1210 07:55:51.154448 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:51.165850 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:51.165926 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:51.191582 1078428 cri.go:89] found id: ""
	I1210 07:55:51.191607 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.191615 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:51.191622 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:51.191681 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:51.216289 1078428 cri.go:89] found id: ""
	I1210 07:55:51.216314 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.216324 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:51.216331 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:51.216390 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:51.245299 1078428 cri.go:89] found id: ""
	I1210 07:55:51.245324 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.245333 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:51.245339 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:51.245400 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:51.269348 1078428 cri.go:89] found id: ""
	I1210 07:55:51.269372 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.269380 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:51.269387 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:51.269443 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:51.296327 1078428 cri.go:89] found id: ""
	I1210 07:55:51.296350 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.296360 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:51.296367 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:51.296433 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:51.326976 1078428 cri.go:89] found id: ""
	I1210 07:55:51.326997 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.327005 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:51.327011 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:51.327069 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:51.360781 1078428 cri.go:89] found id: ""
	I1210 07:55:51.360857 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.360873 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:51.360881 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:51.360960 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:51.384754 1078428 cri.go:89] found id: ""
	I1210 07:55:51.384779 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.384788 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:51.384799 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:51.384810 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:51.443446 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:51.443483 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:51.461527 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:51.461559 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:51.529060 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:51.520063    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.520763    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.522338    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.522821    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.524380    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:51.520063    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.520763    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.522338    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.522821    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.524380    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:51.529096 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:51.529109 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:51.561037 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:51.561354 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
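
The recurring `failed describe nodes` blocks are not a kubectl problem: every attempt dies with `dial tcp [::1]:8443: connect: connection refused`, meaning nothing is listening on the apiserver port at all. A minimal sketch of the same reachability check, using the endpoint from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint kubectl is dialing in the stderr above; "connection
	// refused" here means the kube-apiserver process never bound the port.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err) // e.g. connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}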
	I1210 07:55:54.111711 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:54.122707 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:54.122781 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:54.152821 1078428 cri.go:89] found id: ""
	I1210 07:55:54.152853 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.152867 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:54.152878 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:54.152961 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:54.180559 1078428 cri.go:89] found id: ""
	I1210 07:55:54.180583 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.180591 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:54.180598 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:54.180662 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:54.208251 1078428 cri.go:89] found id: ""
	I1210 07:55:54.208276 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.208285 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:54.208292 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:54.208349 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:54.233630 1078428 cri.go:89] found id: ""
	I1210 07:55:54.233655 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.233664 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:54.233670 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:54.233727 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:54.258409 1078428 cri.go:89] found id: ""
	I1210 07:55:54.258435 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.258443 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:54.258450 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:54.258533 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:54.282200 1078428 cri.go:89] found id: ""
	I1210 07:55:54.282234 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.282242 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:54.282248 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:54.282306 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:54.326329 1078428 cri.go:89] found id: ""
	I1210 07:55:54.326352 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.326361 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:54.326367 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:54.326428 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:54.353371 1078428 cri.go:89] found id: ""
	I1210 07:55:54.353396 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.353405 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:54.353415 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:54.353429 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:54.412987 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:54.413025 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:54.429633 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:54.429718 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:54.497491 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:54.488603    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.489246    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.490208    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.491739    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.492335    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:54.488603    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.489246    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.490208    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.491739    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.492335    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:54.497530 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:54.497544 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:54.523210 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:54.523247 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
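
The timestamps show the cadence: a new cycle starts roughly every three seconds (07:55:48, :51, :54, :57, ...) and keeps going until the overall start timeout expires. A stdlib-only sketch of that poll loop; the 3s interval matches the log, while the 30s timeout is an illustrative stand-in (the real deadline lives in minikube's wait logic, not in this log):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitFor polls check every interval until it succeeds or ctx expires,
// the same shape as the probe cadence visible in the timestamps above.
func waitFor(ctx context.Context, interval time.Duration, check func() error) error {
	t := time.NewTicker(interval)
	defer t.Stop()
	for {
		if err := check(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for apiserver: %w", ctx.Err())
		case <-t.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	err := waitFor(ctx, 3*time.Second, func() error {
		return errors.New("apiserver still down") // stand-in for the pgrep/crictl probes
	})
	fmt.Println(err)
}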
	I1210 07:55:57.066626 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:57.077561 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:57.077642 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:57.102249 1078428 cri.go:89] found id: ""
	I1210 07:55:57.102273 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.102282 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:57.102289 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:57.102352 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:57.126387 1078428 cri.go:89] found id: ""
	I1210 07:55:57.126413 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.126421 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:57.126427 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:57.126506 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:57.151315 1078428 cri.go:89] found id: ""
	I1210 07:55:57.151341 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.151351 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:57.151357 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:57.151417 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:57.180045 1078428 cri.go:89] found id: ""
	I1210 07:55:57.180074 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.180083 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:57.180090 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:57.180150 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:57.205199 1078428 cri.go:89] found id: ""
	I1210 07:55:57.205225 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.205233 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:57.205240 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:57.205299 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:57.233971 1078428 cri.go:89] found id: ""
	I1210 07:55:57.233999 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.234009 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:57.234015 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:57.234078 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:57.258568 1078428 cri.go:89] found id: ""
	I1210 07:55:57.258594 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.258604 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:57.258610 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:57.258668 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:57.282764 1078428 cri.go:89] found id: ""
	I1210 07:55:57.282790 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.282800 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:57.282810 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:57.282823 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:57.299427 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:57.299453 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:57.374740 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:57.366367   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.367109   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.368634   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.369192   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.370847   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:57.366367   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.367109   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.368634   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.369192   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.370847   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:57.374810 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:57.374851 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:57.400786 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:57.400822 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:57.427735 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:57.427767 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
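
Each failed cycle also snapshots the node's logs, and the exact commands are visible in the `Run:` lines: `journalctl -u kubelet -n 400`, a filtered `dmesg`, `journalctl -u containerd -n 400`, and a `crictl ps -a` with a `docker ps -a` fallback. A sketch that runs the same set through bash, assuming local root access rather than minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands copied verbatim from the "Gathering logs for ..." lines above.
	cmds := map[string]string{
		"kubelet":          `sudo journalctl -u kubelet -n 400`,
		"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		"containerd":       `sudo journalctl -u containerd -n 400`,
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range cmds {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("=== %s (err=%v) ===\n%s\n", name, err, out)
	}
}

Note the container-status command degrades gracefully: if crictl is missing from PATH the backtick substitution leaves a bare `crictl` that fails, and the `||` falls through to docker.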
	I1210 07:55:59.984110 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:59.994599 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:59.994677 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:00.044693 1078428 cri.go:89] found id: ""
	I1210 07:56:00.044863 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.044893 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:00.044928 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:00.045024 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:00.118046 1078428 cri.go:89] found id: ""
	I1210 07:56:00.118124 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.118150 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:00.118171 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:00.119167 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:00.182111 1078428 cri.go:89] found id: ""
	I1210 07:56:00.182136 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.182145 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:00.182152 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:00.182960 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:00.239971 1078428 cri.go:89] found id: ""
	I1210 07:56:00.239996 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.240006 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:00.240013 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:00.240085 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:00.287888 1078428 cri.go:89] found id: ""
	I1210 07:56:00.287927 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.287937 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:00.287945 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:00.288014 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:00.352509 1078428 cri.go:89] found id: ""
	I1210 07:56:00.352556 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.352566 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:00.352593 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:00.352712 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:00.421383 1078428 cri.go:89] found id: ""
	I1210 07:56:00.421421 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.421430 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:00.421437 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:00.421521 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:00.456737 1078428 cri.go:89] found id: ""
	I1210 07:56:00.456766 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.456776 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:00.456786 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:00.456803 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:00.539348 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:00.530832   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.531677   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.533452   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.533939   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.535406   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:00.530832   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.531677   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.533452   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.533939   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.535406   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:00.539370 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:00.539385 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:00.569574 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:00.569616 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:00.613655 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:00.613680 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:00.671124 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:00.671163 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:03.187739 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:03.198133 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:03.198208 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:03.223791 1078428 cri.go:89] found id: ""
	I1210 07:56:03.223818 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.223828 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:03.223834 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:03.223894 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:03.248620 1078428 cri.go:89] found id: ""
	I1210 07:56:03.248644 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.248653 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:03.248659 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:03.248720 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:03.273951 1078428 cri.go:89] found id: ""
	I1210 07:56:03.273975 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.273985 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:03.273991 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:03.274053 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:03.300277 1078428 cri.go:89] found id: ""
	I1210 07:56:03.300300 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.300309 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:03.300315 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:03.300372 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:03.332941 1078428 cri.go:89] found id: ""
	I1210 07:56:03.332967 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.332977 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:03.332983 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:03.333038 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:03.367066 1078428 cri.go:89] found id: ""
	I1210 07:56:03.367091 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.367100 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:03.367106 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:03.367164 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:03.391075 1078428 cri.go:89] found id: ""
	I1210 07:56:03.391098 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.391106 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:03.391112 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:03.391170 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:03.415021 1078428 cri.go:89] found id: ""
	I1210 07:56:03.415049 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.415058 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:03.415068 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:03.415079 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:03.440424 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:03.440470 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:03.468290 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:03.468319 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:03.525567 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:03.525601 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:03.541470 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:03.541505 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:03.626098 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:03.618196   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.618603   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.620212   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.620606   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.622172   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:03.618196   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.618603   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.620212   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.620606   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.622172   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
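
Before each crictl sweep, the cycle first checks for a live apiserver process with `sudo pgrep -xnf kube-apiserver.*minikube.*`. pgrep exits non-zero when nothing matches, so in this log the check fails silently and the cycle proceeds straight to the container probes. A minimal sketch of that process check, again with local exec standing in for ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors `sudo pgrep -xnf kube-apiserver.*minikube.*` from the log:
	// -x exact match, -n newest matching process only, -f match the full
	// command line rather than just the process name.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		// pgrep exits 1 when no process matches; exec surfaces that as an error.
		fmt.Println("no kube-apiserver process:", err)
		return
	}
	fmt.Printf("kube-apiserver pid: %s", out)
}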
	I1210 07:56:06.126647 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:06.137759 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:06.137831 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:06.163154 1078428 cri.go:89] found id: ""
	I1210 07:56:06.163181 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.163191 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:06.163198 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:06.163265 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:06.192495 1078428 cri.go:89] found id: ""
	I1210 07:56:06.192521 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.192530 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:06.192536 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:06.192615 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:06.220976 1078428 cri.go:89] found id: ""
	I1210 07:56:06.221009 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.221017 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:06.221025 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:06.221134 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:06.246400 1078428 cri.go:89] found id: ""
	I1210 07:56:06.246427 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.246436 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:06.246442 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:06.246523 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:06.272644 1078428 cri.go:89] found id: ""
	I1210 07:56:06.272667 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.272675 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:06.272681 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:06.272738 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:06.300567 1078428 cri.go:89] found id: ""
	I1210 07:56:06.300636 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.300648 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:06.300655 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:06.300726 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:06.332683 1078428 cri.go:89] found id: ""
	I1210 07:56:06.332750 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.332773 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:06.332795 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:06.332881 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:06.366018 1078428 cri.go:89] found id: ""
	I1210 07:56:06.366099 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.366124 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:06.366149 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:06.366177 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:06.422922 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:06.422958 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:06.439199 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:06.439231 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:06.512644 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:06.503564   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.504265   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.506193   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.506871   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.508506   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:06.503564   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.504265   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.506193   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.506871   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.508506   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:06.512669 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:06.512682 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:06.537590 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:06.537625 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:09.085608 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:09.095930 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:09.096006 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:09.119422 1078428 cri.go:89] found id: ""
	I1210 07:56:09.119445 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.119454 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:09.119460 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:09.119518 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:09.145193 1078428 cri.go:89] found id: ""
	I1210 07:56:09.145220 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.145230 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:09.145236 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:09.145296 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:09.170538 1078428 cri.go:89] found id: ""
	I1210 07:56:09.170567 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.170576 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:09.170582 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:09.170640 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:09.199713 1078428 cri.go:89] found id: ""
	I1210 07:56:09.199741 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.199749 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:09.199756 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:09.199815 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:09.224005 1078428 cri.go:89] found id: ""
	I1210 07:56:09.224037 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.224046 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:09.224053 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:09.224112 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:09.254251 1078428 cri.go:89] found id: ""
	I1210 07:56:09.254273 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.254283 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:09.254290 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:09.254348 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:09.280458 1078428 cri.go:89] found id: ""
	I1210 07:56:09.280484 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.280493 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:09.280500 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:09.280565 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:09.320912 1078428 cri.go:89] found id: ""
	I1210 07:56:09.320943 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.320952 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:09.320961 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:09.320974 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:09.386817 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:09.386854 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:09.402878 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:09.402954 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:09.472013 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:09.462698   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.463971   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.464603   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.466051   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.467835   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:09.462698   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.463971   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.464603   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.466051   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.467835   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:09.472092 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:09.472114 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:09.497983 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:09.498020 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
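
One last structural note on the `failed describe nodes` blocks: stdout and stderr appear separately (stdout empty, stderr carrying the discovery retries plus kubectl's final summary line) because the runner captures the two streams independently. A sketch of that capture using the command string from the log; the kubeconfig path is real only inside a minikube node, so elsewhere this simply fails the same way:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("/bin/bash", "-c",
		"sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	// "Process exited with status 1" in the log corresponds to an
	// *exec.ExitError returned here.
	err := cmd.Run()
	fmt.Printf("stdout:\n%s\nstderr:\n%s\nerr: %v\n", stdout.String(), stderr.String(), err)
}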
	I1210 07:56:12.030207 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:12.040966 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:12.041087 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:12.069314 1078428 cri.go:89] found id: ""
	I1210 07:56:12.069346 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.069356 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:12.069362 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:12.069424 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:12.096321 1078428 cri.go:89] found id: ""
	I1210 07:56:12.096400 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.096423 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:12.096438 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:12.096519 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:12.122859 1078428 cri.go:89] found id: ""
	I1210 07:56:12.122887 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.122896 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:12.122903 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:12.122985 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:12.148481 1078428 cri.go:89] found id: ""
	I1210 07:56:12.148505 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.148514 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:12.148520 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:12.148633 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:12.172954 1078428 cri.go:89] found id: ""
	I1210 07:56:12.172978 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.172995 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:12.173003 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:12.173063 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:12.198414 1078428 cri.go:89] found id: ""
	I1210 07:56:12.198436 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.198446 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:12.198453 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:12.198530 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:12.227549 1078428 cri.go:89] found id: ""
	I1210 07:56:12.227576 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.227586 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:12.227592 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:12.227651 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:12.255277 1078428 cri.go:89] found id: ""
	I1210 07:56:12.255300 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.255309 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:12.255318 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:12.255330 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:12.343072 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:12.327709   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.328182   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.329582   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.330282   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.331929   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:12.327709   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.328182   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.329582   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.330282   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.331929   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:12.343095 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:12.343109 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:12.370845 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:12.370884 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:12.401190 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:12.401217 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:12.456146 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:12.456181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
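Each "Gathering logs for ..." pair above is one shell command executed on the node (ssh_runner.go wraps the transport; journalctl supplies unit logs, dmesg supplies kernel messages). A hedged local sketch of that step using os/exec in place of the SSH runner (structure assumed; the commands are copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // bash -c is required because the dmesg source ends in a pipe.
        sources := map[string]string{
            "kubelet":    "sudo journalctl -u kubelet -n 400",
            "containerd": "sudo journalctl -u containerd -n 400",
            "dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        }
        for name, cmd := range sources {
            fmt.Printf("Gathering logs for %s ...\n", name)
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("  failed: %v\n", err)
                continue
            }
            fmt.Printf("  collected %d bytes\n", len(out))
        }
    }

Capping each source at 400 lines keeps the failure report bounded no matter how long the node has been running.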
	I1210 07:56:14.972152 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:14.983046 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:14.983121 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:15.031099 1078428 cri.go:89] found id: ""
	I1210 07:56:15.031183 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.031217 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:15.031260 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:15.031373 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:15.061619 1078428 cri.go:89] found id: ""
	I1210 07:56:15.061646 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.061655 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:15.061662 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:15.061728 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:15.088678 1078428 cri.go:89] found id: ""
	I1210 07:56:15.088701 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.088709 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:15.088716 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:15.088781 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:15.118776 1078428 cri.go:89] found id: ""
	I1210 07:56:15.118854 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.118872 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:15.118881 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:15.118945 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:15.144691 1078428 cri.go:89] found id: ""
	I1210 07:56:15.144717 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.144727 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:15.144734 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:15.144799 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:15.169827 1078428 cri.go:89] found id: ""
	I1210 07:56:15.169854 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.169863 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:15.169870 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:15.169927 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:15.196425 1078428 cri.go:89] found id: ""
	I1210 07:56:15.196459 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.196468 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:15.196474 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:15.196533 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:15.221736 1078428 cri.go:89] found id: ""
	I1210 07:56:15.221763 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.221772 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:15.221782 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:15.221794 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:15.237860 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:15.237890 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:15.309823 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:15.299280   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.302014   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.302821   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.303726   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.304513   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:15.299280   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.302014   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.302821   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.303726   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.304513   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:15.309847 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:15.309860 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:15.342939 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:15.342990 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:15.376812 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:15.376839 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
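The pgrep lines that open each cycle (07:56:12.030, :14.972, :17.934, ...) show the wait loop: roughly every three seconds minikube checks for a running kube-apiserver process and, while it is absent, re-runs the container checks and log gathering. A sketch of that cadence (control flow inferred from the timestamps, not taken from the minikube source):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning mirrors the log's check:
    //   sudo pgrep -xnf kube-apiserver.*minikube.*
    // pgrep exits non-zero when no process matches.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(time.Minute)
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(3 * time.Second) // matches the ~3s spacing above
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }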
	I1210 07:56:17.934235 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:17.945317 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:17.945396 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:17.971659 1078428 cri.go:89] found id: ""
	I1210 07:56:17.971685 1078428 logs.go:282] 0 containers: []
	W1210 07:56:17.971694 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:17.971700 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:17.971753 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:17.996434 1078428 cri.go:89] found id: ""
	I1210 07:56:17.996476 1078428 logs.go:282] 0 containers: []
	W1210 07:56:17.996488 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:17.996495 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:17.996560 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:18.024303 1078428 cri.go:89] found id: ""
	I1210 07:56:18.024338 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.024347 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:18.024354 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:18.024416 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:18.049317 1078428 cri.go:89] found id: ""
	I1210 07:56:18.049344 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.049353 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:18.049360 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:18.049421 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:18.079586 1078428 cri.go:89] found id: ""
	I1210 07:56:18.079611 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.079620 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:18.079627 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:18.079686 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:18.108486 1078428 cri.go:89] found id: ""
	I1210 07:56:18.108511 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.108519 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:18.108526 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:18.108601 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:18.137645 1078428 cri.go:89] found id: ""
	I1210 07:56:18.137671 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.137680 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:18.137686 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:18.137767 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:18.161838 1078428 cri.go:89] found id: ""
	I1210 07:56:18.161863 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.161874 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:18.161883 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:18.161916 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:18.235505 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:18.227479   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.228109   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.229636   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.230246   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.231753   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:18.227479   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.228109   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.229636   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.230246   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.231753   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:18.235526 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:18.235539 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:18.260551 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:18.260589 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:18.288267 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:18.288296 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:18.349132 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:18.349215 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
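Within each cycle the cri.go lines query the CRI runtime once per control-plane component; `found id: ""` followed by "0 containers" means the name filter matched nothing. A sketch of that per-component check (hypothetical helper; the command is copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listIDs returns the container IDs whose name matches the filter,
    // printed one per line by: sudo crictl ps -a --quiet --name=<name>
    func listIDs(name string) []string {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, c := range components {
            if ids := listIDs(c); len(ids) > 0 {
                fmt.Printf("%s: %d container(s)\n", c, len(ids))
            } else {
                fmt.Printf("No container was found matching %q\n", c)
            }
        }
    }

Listing with -a includes exited containers, so an empty result means the component was never created at all, not merely that it crashed.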
	I1210 07:56:20.868569 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:20.879574 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:20.879649 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:20.904201 1078428 cri.go:89] found id: ""
	I1210 07:56:20.904226 1078428 logs.go:282] 0 containers: []
	W1210 07:56:20.904235 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:20.904241 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:20.904299 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:20.929396 1078428 cri.go:89] found id: ""
	I1210 07:56:20.929423 1078428 logs.go:282] 0 containers: []
	W1210 07:56:20.929432 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:20.929439 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:20.929514 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:20.954953 1078428 cri.go:89] found id: ""
	I1210 07:56:20.954984 1078428 logs.go:282] 0 containers: []
	W1210 07:56:20.954993 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:20.954999 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:20.955058 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:20.978741 1078428 cri.go:89] found id: ""
	I1210 07:56:20.978767 1078428 logs.go:282] 0 containers: []
	W1210 07:56:20.978776 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:20.978782 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:20.978841 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:21.003286 1078428 cri.go:89] found id: ""
	I1210 07:56:21.003313 1078428 logs.go:282] 0 containers: []
	W1210 07:56:21.003323 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:21.003330 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:21.003402 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:21.034505 1078428 cri.go:89] found id: ""
	I1210 07:56:21.034527 1078428 logs.go:282] 0 containers: []
	W1210 07:56:21.034536 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:21.034543 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:21.034605 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:21.058861 1078428 cri.go:89] found id: ""
	I1210 07:56:21.058885 1078428 logs.go:282] 0 containers: []
	W1210 07:56:21.058894 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:21.058900 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:21.058958 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:21.082740 1078428 cri.go:89] found id: ""
	I1210 07:56:21.082764 1078428 logs.go:282] 0 containers: []
	W1210 07:56:21.082773 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:21.082782 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:21.082794 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:21.098247 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:21.098276 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:21.161962 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:21.153624   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.154239   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.155892   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.156389   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.158100   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:21.153624   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.154239   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.155892   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.156389   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.158100   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
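The "describe nodes" source differs from the journalctl-based ones: it invokes the version-pinned kubectl binary shipped on the node with the node's own kubeconfig, and with the apiserver down the command exits 1, producing the logs.go:130 warning above. A sketch of that invocation (paths copied from the log; the wrapper structure is assumed):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
        ).CombinedOutput()
        if err != nil {
            // Non-fatal, as in the log: the failure is recorded and the
            // remaining log sources are still gathered.
            fmt.Printf("failed describe nodes: %v\n%s", err, out)
            return
        }
        fmt.Print(string(out))
    }

Pinning kubectl to v1.35.0-beta.0 keeps the client version matched to the cluster under test rather than to whatever kubectl the host happens to have installed.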
	I1210 07:56:21.161982 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:21.161995 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:21.187272 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:21.187314 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:21.214180 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:21.214213 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:23.769450 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:23.780372 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:23.780505 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:23.817607 1078428 cri.go:89] found id: ""
	I1210 07:56:23.817631 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.817641 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:23.817648 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:23.817709 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:23.848903 1078428 cri.go:89] found id: ""
	I1210 07:56:23.848927 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.848949 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:23.848960 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:23.849023 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:23.877281 1078428 cri.go:89] found id: ""
	I1210 07:56:23.877305 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.877314 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:23.877320 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:23.877387 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:23.903972 1078428 cri.go:89] found id: ""
	I1210 07:56:23.903997 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.904006 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:23.904013 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:23.904089 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:23.929481 1078428 cri.go:89] found id: ""
	I1210 07:56:23.929508 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.929517 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:23.929525 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:23.929586 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:23.954626 1078428 cri.go:89] found id: ""
	I1210 07:56:23.954665 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.954676 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:23.954683 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:23.954785 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:23.980069 1078428 cri.go:89] found id: ""
	I1210 07:56:23.980102 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.980111 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:23.980117 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:23.980176 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:24.005963 1078428 cri.go:89] found id: ""
	I1210 07:56:24.005987 1078428 logs.go:282] 0 containers: []
	W1210 07:56:24.005996 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:24.006006 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:24.006017 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:24.036028 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:24.036065 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:24.065541 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:24.065571 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:24.126584 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:24.126630 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:24.143358 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:24.143391 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:24.208974 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:24.200724   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.201305   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.202949   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.203979   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.204653   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:24.200724   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.201305   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.202949   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.203979   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.204653   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:26.710619 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:26.721267 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:26.721343 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:26.746073 1078428 cri.go:89] found id: ""
	I1210 07:56:26.746100 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.746109 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:26.746115 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:26.746178 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:26.772432 1078428 cri.go:89] found id: ""
	I1210 07:56:26.772456 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.772472 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:26.772479 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:26.772538 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:26.809928 1078428 cri.go:89] found id: ""
	I1210 07:56:26.809954 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.809964 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:26.809970 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:26.810026 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:26.837500 1078428 cri.go:89] found id: ""
	I1210 07:56:26.837522 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.837531 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:26.837538 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:26.837592 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:26.864667 1078428 cri.go:89] found id: ""
	I1210 07:56:26.864693 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.864702 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:26.864708 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:26.864768 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:26.892330 1078428 cri.go:89] found id: ""
	I1210 07:56:26.892359 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.892368 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:26.892374 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:26.892457 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:26.916781 1078428 cri.go:89] found id: ""
	I1210 07:56:26.916807 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.916815 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:26.916822 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:26.916902 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:26.945103 1078428 cri.go:89] found id: ""
	I1210 07:56:26.945128 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.945137 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:26.945147 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:26.945178 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:27.001893 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:27.001933 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:27.020119 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:27.020149 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:27.092626 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:27.084844   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.085466   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.086971   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.087472   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.088931   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:27.084844   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.085466   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.086971   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.087472   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.088931   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:27.092690 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:27.092712 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:27.118838 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:27.118873 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
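The "container status" source uses a two-level fallback visible in the command itself: `which crictl || echo crictl` substitutes a bare crictl if which finds nothing on the PATH, and the trailing `|| sudo docker ps -a` switches to the docker CLI if the crictl invocation fails. A sketch of the same fallback chain (illustrative only):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirrors: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            // crictl missing or erroring: fall back to the docker CLI.
            out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        }
        if err != nil {
            fmt.Println("no usable container runtime CLI:", err)
            return
        }
        fmt.Print(string(out))
    }

On this containerd runtime the crictl branch is the one that actually runs; the docker branch only matters on docker-runtime profiles.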
	I1210 07:56:29.646997 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:29.659058 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:29.659139 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:29.684417 1078428 cri.go:89] found id: ""
	I1210 07:56:29.684442 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.684452 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:29.684459 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:29.684532 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:29.713716 1078428 cri.go:89] found id: ""
	I1210 07:56:29.713747 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.713756 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:29.713762 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:29.713829 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:29.742671 1078428 cri.go:89] found id: ""
	I1210 07:56:29.742747 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.742761 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:29.742769 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:29.742834 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:29.767461 1078428 cri.go:89] found id: ""
	I1210 07:56:29.767488 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.767497 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:29.767503 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:29.767590 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:29.791629 1078428 cri.go:89] found id: ""
	I1210 07:56:29.791655 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.791664 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:29.791670 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:29.791728 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:29.822213 1078428 cri.go:89] found id: ""
	I1210 07:56:29.822240 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.822249 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:29.822255 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:29.822317 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:29.854606 1078428 cri.go:89] found id: ""
	I1210 07:56:29.854633 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.854643 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:29.854649 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:29.854709 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:29.880033 1078428 cri.go:89] found id: ""
	I1210 07:56:29.880059 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.880068 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:29.880077 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:29.880090 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:29.948475 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:29.940551   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.941416   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.942953   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.943415   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.944654   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:29.940551   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.941416   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.942953   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.943415   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.944654   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:29.948498 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:29.948512 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:29.974136 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:29.974171 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:30.013967 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:30.014008 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:30.097748 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:30.097788 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:32.617610 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:32.628661 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:32.628735 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:32.652564 1078428 cri.go:89] found id: ""
	I1210 07:56:32.652594 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.652603 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:32.652610 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:32.652668 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:32.680277 1078428 cri.go:89] found id: ""
	I1210 07:56:32.680302 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.680310 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:32.680317 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:32.680379 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:32.704183 1078428 cri.go:89] found id: ""
	I1210 07:56:32.704207 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.704216 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:32.704222 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:32.704285 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:32.729141 1078428 cri.go:89] found id: ""
	I1210 07:56:32.729165 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.729174 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:32.729180 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:32.729237 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:32.753460 1078428 cri.go:89] found id: ""
	I1210 07:56:32.753482 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.753490 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:32.753496 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:32.753562 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:32.781036 1078428 cri.go:89] found id: ""
	I1210 07:56:32.781061 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.781069 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:32.781076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:32.781131 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:32.816565 1078428 cri.go:89] found id: ""
	I1210 07:56:32.816586 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.816594 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:32.816599 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:32.816655 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:32.848807 1078428 cri.go:89] found id: ""
	I1210 07:56:32.848832 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.848841 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:32.848849 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:32.848861 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:32.908343 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:32.908379 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:32.924367 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:32.924396 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:32.994542 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:32.985949   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.986567   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.988414   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.988968   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.990531   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:32.985949   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.986567   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.988414   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.988968   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.990531   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:32.994565 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:32.994581 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:33.024802 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:33.024842 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
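The block above is one full iteration of minikube's apiserver wait loop: roughly every three seconds it runs pgrep for a kube-apiserver process, asks crictl for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and then gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. The same diagnosis can be reproduced by hand; a minimal sketch using only commands that already appear in the log above (<profile> is a placeholder, since this excerpt does not name the cluster profile):

	# Is any kube-apiserver process running on the node?
	minikube -p <profile> ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# List kube-apiserver containers known to containerd, running or exited
	minikube -p <profile> ssh -- sudo crictl ps -a --name=kube-apiserver
	# If both come back empty, the kubelet never started the static pods; its journal says why
	minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400

With zero containers found for every component, the loop can only keep cycling until the wait timeout expires, which is what produces the long run of near-identical blocks that follows.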
	I1210 07:56:35.557491 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:35.568723 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:35.568795 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:35.601157 1078428 cri.go:89] found id: ""
	I1210 07:56:35.601184 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.601193 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:35.601200 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:35.601260 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:35.628459 1078428 cri.go:89] found id: ""
	I1210 07:56:35.628494 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.628503 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:35.628509 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:35.628570 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:35.656310 1078428 cri.go:89] found id: ""
	I1210 07:56:35.656332 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.656342 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:35.656348 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:35.656404 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:35.680954 1078428 cri.go:89] found id: ""
	I1210 07:56:35.680980 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.680992 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:35.680998 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:35.681055 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:35.708548 1078428 cri.go:89] found id: ""
	I1210 07:56:35.708575 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.708584 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:35.708590 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:35.708648 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:35.736013 1078428 cri.go:89] found id: ""
	I1210 07:56:35.736040 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.736049 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:35.736056 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:35.736124 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:35.760465 1078428 cri.go:89] found id: ""
	I1210 07:56:35.760495 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.760504 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:35.760511 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:35.760574 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:35.785429 1078428 cri.go:89] found id: ""
	I1210 07:56:35.785451 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.785460 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:35.785469 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:35.785481 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:35.871280 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:35.862207   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.863090   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.864745   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.865288   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.866984   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:35.862207   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.863090   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.864745   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.865288   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.866984   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:35.871302 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:35.871315 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:35.897087 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:35.897124 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:35.925107 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:35.925134 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:35.981188 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:35.981270 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:38.499048 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:38.509835 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:38.509908 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:38.534615 1078428 cri.go:89] found id: ""
	I1210 07:56:38.534637 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.534645 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:38.534652 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:38.534708 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:38.576309 1078428 cri.go:89] found id: ""
	I1210 07:56:38.576332 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.576341 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:38.576347 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:38.576407 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:38.611259 1078428 cri.go:89] found id: ""
	I1210 07:56:38.611281 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.611290 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:38.611297 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:38.611357 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:38.637583 1078428 cri.go:89] found id: ""
	I1210 07:56:38.637612 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.637621 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:38.637627 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:38.637686 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:38.662187 1078428 cri.go:89] found id: ""
	I1210 07:56:38.662267 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.662290 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:38.662310 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:38.662402 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:38.686838 1078428 cri.go:89] found id: ""
	I1210 07:56:38.686861 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.686869 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:38.686876 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:38.686933 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:38.710788 1078428 cri.go:89] found id: ""
	I1210 07:56:38.710815 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.710824 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:38.710831 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:38.710930 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:38.736531 1078428 cri.go:89] found id: ""
	I1210 07:56:38.736556 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.736565 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:38.736575 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:38.736589 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:38.752335 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:38.752364 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:38.826607 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:38.813602   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.818748   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.819180   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.820763   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.821332   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:38.813602   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.818748   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.819180   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.820763   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.821332   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:38.826675 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:38.826688 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:38.854204 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:38.854240 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:38.883619 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:38.883647 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:41.439316 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:41.450451 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:41.450532 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:41.476998 1078428 cri.go:89] found id: ""
	I1210 07:56:41.477022 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.477030 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:41.477036 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:41.477096 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:41.502043 1078428 cri.go:89] found id: ""
	I1210 07:56:41.502069 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.502078 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:41.502084 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:41.502145 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:41.526905 1078428 cri.go:89] found id: ""
	I1210 07:56:41.526931 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.526940 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:41.526947 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:41.527007 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:41.558750 1078428 cri.go:89] found id: ""
	I1210 07:56:41.558779 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.558788 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:41.558795 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:41.558851 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:41.596637 1078428 cri.go:89] found id: ""
	I1210 07:56:41.596664 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.596674 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:41.596680 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:41.596742 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:41.622316 1078428 cri.go:89] found id: ""
	I1210 07:56:41.622340 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.622348 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:41.622355 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:41.622418 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:41.648410 1078428 cri.go:89] found id: ""
	I1210 07:56:41.648482 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.648511 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:41.648518 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:41.648581 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:41.680776 1078428 cri.go:89] found id: ""
	I1210 07:56:41.680802 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.680811 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:41.680820 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:41.680832 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:41.708185 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:41.708211 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:41.767625 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:41.767662 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:41.784949 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:41.784980 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:41.871610 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:41.863026   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.863723   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.865304   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.865872   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.867591   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:41.863026   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.863723   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.865304   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.865872   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.867591   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:41.871632 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:41.871645 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:44.398611 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:44.408733 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:44.408806 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:44.432507 1078428 cri.go:89] found id: ""
	I1210 07:56:44.432531 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.432540 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:44.432546 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:44.432607 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:44.457597 1078428 cri.go:89] found id: ""
	I1210 07:56:44.457622 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.457631 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:44.457637 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:44.457697 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:44.485123 1078428 cri.go:89] found id: ""
	I1210 07:56:44.485149 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.485158 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:44.485165 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:44.485228 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:44.510813 1078428 cri.go:89] found id: ""
	I1210 07:56:44.510848 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.510857 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:44.510870 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:44.510929 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:44.534504 1078428 cri.go:89] found id: ""
	I1210 07:56:44.534528 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.534537 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:44.534543 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:44.534600 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:44.574866 1078428 cri.go:89] found id: ""
	I1210 07:56:44.574940 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.574962 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:44.574983 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:44.575074 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:44.605450 1078428 cri.go:89] found id: ""
	I1210 07:56:44.605523 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.605546 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:44.605566 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:44.605652 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:44.633965 1078428 cri.go:89] found id: ""
	I1210 07:56:44.634039 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.634064 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:44.634087 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:44.634124 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:44.692591 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:44.692628 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:44.708687 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:44.708718 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:44.774532 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:44.765883   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.766513   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.768058   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.769010   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.770731   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:44.765883   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.766513   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.768058   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.769010   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.770731   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:44.774581 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:44.774594 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:44.801145 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:44.801235 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:47.336116 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:47.346722 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:47.346793 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:47.370822 1078428 cri.go:89] found id: ""
	I1210 07:56:47.370860 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.370870 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:47.370876 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:47.370948 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:47.401111 1078428 cri.go:89] found id: ""
	I1210 07:56:47.401140 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.401149 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:47.401155 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:47.401212 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:47.430968 1078428 cri.go:89] found id: ""
	I1210 07:56:47.430991 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.430999 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:47.431004 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:47.431063 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:47.455626 1078428 cri.go:89] found id: ""
	I1210 07:56:47.455650 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.455659 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:47.455665 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:47.455722 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:47.479857 1078428 cri.go:89] found id: ""
	I1210 07:56:47.479882 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.479890 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:47.479896 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:47.479959 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:47.504271 1078428 cri.go:89] found id: ""
	I1210 07:56:47.504294 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.504305 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:47.504312 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:47.504373 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:47.532761 1078428 cri.go:89] found id: ""
	I1210 07:56:47.532837 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.532863 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:47.532886 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:47.532990 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:47.570086 1078428 cri.go:89] found id: ""
	I1210 07:56:47.570108 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.570116 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:47.570125 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:47.570137 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:47.586049 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:47.586078 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:47.655434 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:47.647357   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.647927   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.649588   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.650042   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.651608   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:47.647357   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.647927   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.649588   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.650042   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.651608   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:47.655455 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:47.655470 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:47.680757 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:47.680794 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:47.708957 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:47.708986 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:50.265598 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:50.276268 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:50.276342 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:50.301484 1078428 cri.go:89] found id: ""
	I1210 07:56:50.301507 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.301515 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:50.301521 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:50.301582 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:50.327230 1078428 cri.go:89] found id: ""
	I1210 07:56:50.327255 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.327264 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:50.327270 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:50.327331 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:50.352201 1078428 cri.go:89] found id: ""
	I1210 07:56:50.352224 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.352233 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:50.352239 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:50.352299 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:50.377546 1078428 cri.go:89] found id: ""
	I1210 07:56:50.377571 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.377580 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:50.377586 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:50.377647 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:50.403517 1078428 cri.go:89] found id: ""
	I1210 07:56:50.403544 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.403552 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:50.403559 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:50.403635 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:50.432794 1078428 cri.go:89] found id: ""
	I1210 07:56:50.432820 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.432829 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:50.432835 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:50.432924 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:50.456905 1078428 cri.go:89] found id: ""
	I1210 07:56:50.456931 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.456941 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:50.456947 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:50.457013 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:50.488840 1078428 cri.go:89] found id: ""
	I1210 07:56:50.488908 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.488932 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:50.488949 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:50.488962 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:50.547966 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:50.548000 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:50.565711 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:50.565789 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:50.652776 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:50.644502   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.645087   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.646685   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.647075   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.648875   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:50.644502   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.645087   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.646685   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.647075   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.648875   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:50.652800 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:50.652815 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:50.678909 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:50.678950 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:53.207825 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:53.218403 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:53.218500 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:53.244529 1078428 cri.go:89] found id: ""
	I1210 07:56:53.244556 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.244565 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:53.244572 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:53.244629 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:53.270382 1078428 cri.go:89] found id: ""
	I1210 07:56:53.270408 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.270418 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:53.270424 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:53.270517 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:53.295316 1078428 cri.go:89] found id: ""
	I1210 07:56:53.295342 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.295352 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:53.295358 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:53.295425 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:53.324326 1078428 cri.go:89] found id: ""
	I1210 07:56:53.324351 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.324360 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:53.324367 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:53.324444 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:53.349399 1078428 cri.go:89] found id: ""
	I1210 07:56:53.349425 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.349435 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:53.349441 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:53.349555 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:53.374280 1078428 cri.go:89] found id: ""
	I1210 07:56:53.374305 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.374314 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:53.374321 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:53.374431 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:53.398894 1078428 cri.go:89] found id: ""
	I1210 07:56:53.398920 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.398929 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:53.398935 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:53.398992 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:53.423872 1078428 cri.go:89] found id: ""
	I1210 07:56:53.423897 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.423907 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:53.423920 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:53.423936 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:53.440226 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:53.440258 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:53.503949 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:53.495490   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.495917   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.497631   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.498044   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.499663   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:53.495490   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.495917   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.497631   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.498044   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.499663   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:53.503975 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:53.503989 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:53.530691 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:53.530737 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:53.577761 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:53.577835 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
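Every describe-nodes attempt above fails the same way: kubectl cannot even open a TCP connection to localhost:8443 (connection refused), meaning nothing is listening on the apiserver port at all, which is consistent with crictl reporting zero kube-apiserver containers. Two quick checks from inside the node would confirm that reading; a sketch under the assumption that ss and curl are present in the node image (neither is shown in this log):

	# Nothing should show up bound to the apiserver port (8443 here)
	minikube -p <profile> ssh -- sudo ss -ltnp | grep 8443
	# Hitting the health endpoint directly reproduces the refusal seen in the log
	minikube -p <profile> ssh -- curl -sk https://localhost:8443/healthz

Until a kube-apiserver container actually starts, the retry loop below keeps producing the same connection-refused stanzas.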
	I1210 07:56:56.142597 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:56.153164 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:56.153234 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:56.177358 1078428 cri.go:89] found id: ""
	I1210 07:56:56.177391 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.177400 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:56.177406 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:56.177475 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:56.202573 1078428 cri.go:89] found id: ""
	I1210 07:56:56.202641 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.202657 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:56.202664 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:56.202725 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:56.226758 1078428 cri.go:89] found id: ""
	I1210 07:56:56.226785 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.226795 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:56.226802 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:56.226891 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:56.250286 1078428 cri.go:89] found id: ""
	I1210 07:56:56.250310 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.250319 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:56.250327 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:56.250381 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:56.276297 1078428 cri.go:89] found id: ""
	I1210 07:56:56.276375 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.276391 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:56.276398 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:56.276458 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:56.301334 1078428 cri.go:89] found id: ""
	I1210 07:56:56.301366 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.301375 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:56.301382 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:56.301450 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:56.325521 1078428 cri.go:89] found id: ""
	I1210 07:56:56.325557 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.325566 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:56.325572 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:56.325640 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:56.351180 1078428 cri.go:89] found id: ""
	I1210 07:56:56.351219 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.351228 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:56.351237 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:56.351249 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:56.406556 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:56.406592 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:56.422756 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:56.422788 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:56.486945 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:56.478739   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.479484   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.481033   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.481354   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.483059   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:56.478739   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.479484   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.481033   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.481354   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.483059   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:56.486967 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:56.486983 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:56.512575 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:56.512616 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:59.046618 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:59.059092 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:59.059161 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:59.089542 1078428 cri.go:89] found id: ""
	I1210 07:56:59.089571 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.089580 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:59.089586 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:59.089648 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:59.118669 1078428 cri.go:89] found id: ""
	I1210 07:56:59.118691 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.118700 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:59.118706 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:59.118770 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:59.143775 1078428 cri.go:89] found id: ""
	I1210 07:56:59.143802 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.143814 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:59.143821 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:59.143880 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:59.167972 1078428 cri.go:89] found id: ""
	I1210 07:56:59.167997 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.168006 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:59.168012 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:59.168088 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:59.195291 1078428 cri.go:89] found id: ""
	I1210 07:56:59.195316 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.195325 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:59.195331 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:59.195434 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:59.219900 1078428 cri.go:89] found id: ""
	I1210 07:56:59.219928 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.219937 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:59.219943 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:59.220002 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:59.252792 1078428 cri.go:89] found id: ""
	I1210 07:56:59.252818 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.252827 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:59.252834 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:59.252894 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:59.281785 1078428 cri.go:89] found id: ""
	I1210 07:56:59.281808 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.281823 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:59.281832 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:59.281843 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:59.337457 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:59.337496 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:59.353622 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:59.353650 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:59.423704 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:59.414855   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.415874   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.416954   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.418065   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.418769   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:59.414855   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.415874   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.416954   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.418065   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.418769   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:59.423725 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:59.423739 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:59.449814 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:59.449853 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:01.979246 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:01.990999 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:01.991072 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:02.022990 1078428 cri.go:89] found id: ""
	I1210 07:57:02.023028 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.023038 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:02.023046 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:02.023109 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:02.050830 1078428 cri.go:89] found id: ""
	I1210 07:57:02.050857 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.050867 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:02.050873 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:02.050930 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:02.080878 1078428 cri.go:89] found id: ""
	I1210 07:57:02.080901 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.080909 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:02.080915 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:02.080974 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:02.111744 1078428 cri.go:89] found id: ""
	I1210 07:57:02.111766 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.111774 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:02.111780 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:02.111838 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:02.139560 1078428 cri.go:89] found id: ""
	I1210 07:57:02.139587 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.139596 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:02.139602 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:02.139662 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:02.164249 1078428 cri.go:89] found id: ""
	I1210 07:57:02.164274 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.164282 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:02.164289 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:02.164347 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:02.191165 1078428 cri.go:89] found id: ""
	I1210 07:57:02.191187 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.191196 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:02.191202 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:02.191280 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:02.220305 1078428 cri.go:89] found id: ""
	I1210 07:57:02.220371 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.220395 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:02.220419 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:02.220447 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:02.275451 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:02.275490 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:02.291722 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:02.291797 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:02.357294 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:02.349371   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.349907   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.351488   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.351933   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.353434   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:02.349371   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.349907   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.351488   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.351933   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.353434   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:02.357319 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:02.357333 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:02.382557 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:02.382591 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:04.913285 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:04.924140 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:04.924214 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:04.949752 1078428 cri.go:89] found id: ""
	I1210 07:57:04.949787 1078428 logs.go:282] 0 containers: []
	W1210 07:57:04.949796 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:04.949803 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:04.949869 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:04.974850 1078428 cri.go:89] found id: ""
	I1210 07:57:04.974876 1078428 logs.go:282] 0 containers: []
	W1210 07:57:04.974886 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:04.974892 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:04.974949 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:04.999787 1078428 cri.go:89] found id: ""
	I1210 07:57:04.999853 1078428 logs.go:282] 0 containers: []
	W1210 07:57:04.999868 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:04.999875 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:04.999937 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:05.031544 1078428 cri.go:89] found id: ""
	I1210 07:57:05.031570 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.031580 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:05.031586 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:05.031644 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:05.068235 1078428 cri.go:89] found id: ""
	I1210 07:57:05.068262 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.068272 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:05.068278 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:05.068337 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:05.101435 1078428 cri.go:89] found id: ""
	I1210 07:57:05.101462 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.101472 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:05.101479 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:05.101545 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:05.129616 1078428 cri.go:89] found id: ""
	I1210 07:57:05.129640 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.129648 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:05.129654 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:05.129733 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:05.155520 1078428 cri.go:89] found id: ""
	I1210 07:57:05.155544 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.155553 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:05.155563 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:05.155575 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:05.212400 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:05.212436 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:05.228606 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:05.228643 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:05.292822 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:05.284723   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.285318   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.286836   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.287339   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.288802   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:05.284723   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.285318   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.286836   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.287339   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.288802   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:05.292845 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:05.292858 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:05.318694 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:05.318732 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:07.846610 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:07.857861 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:07.857939 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:07.885093 1078428 cri.go:89] found id: ""
	I1210 07:57:07.885115 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.885124 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:07.885130 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:07.885192 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:07.909018 1078428 cri.go:89] found id: ""
	I1210 07:57:07.909043 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.909052 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:07.909058 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:07.909116 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:07.935262 1078428 cri.go:89] found id: ""
	I1210 07:57:07.935288 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.935298 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:07.935303 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:07.935366 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:07.959939 1078428 cri.go:89] found id: ""
	I1210 07:57:07.959965 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.959974 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:07.959981 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:07.960039 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:07.991314 1078428 cri.go:89] found id: ""
	I1210 07:57:07.991341 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.991350 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:07.991356 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:07.991415 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:08.020601 1078428 cri.go:89] found id: ""
	I1210 07:57:08.020628 1078428 logs.go:282] 0 containers: []
	W1210 07:57:08.020638 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:08.020645 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:08.020709 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:08.049221 1078428 cri.go:89] found id: ""
	I1210 07:57:08.049250 1078428 logs.go:282] 0 containers: []
	W1210 07:57:08.049259 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:08.049265 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:08.049323 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:08.078839 1078428 cri.go:89] found id: ""
	I1210 07:57:08.078862 1078428 logs.go:282] 0 containers: []
	W1210 07:57:08.078870 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:08.078883 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:08.078896 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:08.098811 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:08.098888 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:08.168958 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:08.160514   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.160982   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.162642   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.163069   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.164788   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:08.160514   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.160982   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.162642   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.163069   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.164788   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:08.169024 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:08.169046 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:08.195261 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:08.195297 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:08.222093 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:08.222121 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:10.778721 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:10.791524 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:10.791597 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:10.819485 1078428 cri.go:89] found id: ""
	I1210 07:57:10.819507 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.819519 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:10.819525 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:10.819585 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:10.872623 1078428 cri.go:89] found id: ""
	I1210 07:57:10.872646 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.872654 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:10.872660 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:10.872724 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:10.898357 1078428 cri.go:89] found id: ""
	I1210 07:57:10.898378 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.898387 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:10.898393 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:10.898448 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:10.923976 1078428 cri.go:89] found id: ""
	I1210 07:57:10.924000 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.924009 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:10.924016 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:10.924095 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:10.952951 1078428 cri.go:89] found id: ""
	I1210 07:57:10.952986 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.952996 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:10.953002 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:10.953069 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:10.977761 1078428 cri.go:89] found id: ""
	I1210 07:57:10.977793 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.977802 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:10.977808 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:10.977878 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:11.009022 1078428 cri.go:89] found id: ""
	I1210 07:57:11.009052 1078428 logs.go:282] 0 containers: []
	W1210 07:57:11.009069 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:11.009076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:11.009147 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:11.034444 1078428 cri.go:89] found id: ""
	I1210 07:57:11.034493 1078428 logs.go:282] 0 containers: []
	W1210 07:57:11.034502 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:11.034512 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:11.034523 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:11.098059 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:11.098096 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:11.117339 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:11.117370 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:11.190897 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:11.182016   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.182889   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.184458   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.184955   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.186522   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:11.182016   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.182889   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.184458   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.184955   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.186522   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:11.190919 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:11.190932 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:11.215685 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:11.215722 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:13.744333 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:13.754962 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:13.755031 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:13.783588 1078428 cri.go:89] found id: ""
	I1210 07:57:13.783611 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.783619 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:13.783625 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:13.783683 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:13.819100 1078428 cri.go:89] found id: ""
	I1210 07:57:13.819122 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.819130 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:13.819136 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:13.819193 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:13.860234 1078428 cri.go:89] found id: ""
	I1210 07:57:13.860257 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.860266 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:13.860272 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:13.860332 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:13.886331 1078428 cri.go:89] found id: ""
	I1210 07:57:13.886406 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.886418 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:13.886424 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:13.886540 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:13.911054 1078428 cri.go:89] found id: ""
	I1210 07:57:13.911080 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.911089 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:13.911097 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:13.911172 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:13.934983 1078428 cri.go:89] found id: ""
	I1210 07:57:13.935051 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.935066 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:13.935073 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:13.935131 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:13.960415 1078428 cri.go:89] found id: ""
	I1210 07:57:13.960440 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.960449 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:13.960455 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:13.960538 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:13.985917 1078428 cri.go:89] found id: ""
	I1210 07:57:13.985964 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.985974 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:13.985983 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:13.985995 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:14.046091 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:14.046336 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:14.068485 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:14.068513 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:14.145212 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:14.136671   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.137530   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.139238   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.139533   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.141026   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:14.136671   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.137530   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.139238   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.139533   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.141026   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:14.145235 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:14.145248 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:14.170375 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:14.170409 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:16.699528 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:16.710231 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:16.710301 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:16.734299 1078428 cri.go:89] found id: ""
	I1210 07:57:16.734325 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.734333 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:16.734339 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:16.734402 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:16.759890 1078428 cri.go:89] found id: ""
	I1210 07:57:16.759916 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.759925 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:16.759934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:16.760017 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:16.788155 1078428 cri.go:89] found id: ""
	I1210 07:57:16.788181 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.788191 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:16.788197 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:16.788256 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:16.817801 1078428 cri.go:89] found id: ""
	I1210 07:57:16.817828 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.817837 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:16.817844 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:16.817904 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:16.845878 1078428 cri.go:89] found id: ""
	I1210 07:57:16.845905 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.845913 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:16.845919 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:16.845975 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:16.873613 1078428 cri.go:89] found id: ""
	I1210 07:57:16.873641 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.873651 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:16.873658 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:16.873719 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:16.898666 1078428 cri.go:89] found id: ""
	I1210 07:57:16.898689 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.898698 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:16.898704 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:16.898762 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:16.922533 1078428 cri.go:89] found id: ""
	I1210 07:57:16.922560 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.922569 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:16.922579 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:16.922591 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:16.948298 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:16.948341 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:16.976671 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:16.976699 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:17.033642 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:17.033681 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:17.052529 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:17.052568 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:17.131312 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:17.121533   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.122382   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.123692   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.124090   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.127051   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:17.121533   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.122382   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.123692   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.124090   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.127051   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:19.632225 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:19.644243 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:19.644343 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:19.682502 1078428 cri.go:89] found id: ""
	I1210 07:57:19.682536 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.682546 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:19.682553 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:19.682615 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:19.709431 1078428 cri.go:89] found id: ""
	I1210 07:57:19.709455 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.709464 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:19.709470 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:19.709532 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:19.739384 1078428 cri.go:89] found id: ""
	I1210 07:57:19.739426 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.739436 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:19.739442 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:19.739502 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:19.767244 1078428 cri.go:89] found id: ""
	I1210 07:57:19.767266 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.767274 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:19.767281 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:19.767338 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:19.802183 1078428 cri.go:89] found id: ""
	I1210 07:57:19.802207 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.802216 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:19.802222 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:19.802283 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:19.864351 1078428 cri.go:89] found id: ""
	I1210 07:57:19.864373 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.864381 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:19.864388 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:19.864446 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:19.923313 1078428 cri.go:89] found id: ""
	I1210 07:57:19.923336 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.923344 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:19.923350 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:19.923412 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:19.956689 1078428 cri.go:89] found id: ""
	I1210 07:57:19.956768 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.956792 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:19.956836 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:19.956870 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:20.020110 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:20.020150 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:20.041105 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:20.041136 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:20.171782 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:20.151749   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.157532   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.158302   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.161399   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.161675   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:20.151749   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.157532   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.158302   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.161399   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.161675   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:20.171803 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:20.171817 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:20.212388 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:20.212467 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:22.753904 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:22.771857 1078428 out.go:203] 
	W1210 07:57:22.774733 1078428 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1210 07:57:22.774767 1078428 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1210 07:57:22.774778 1078428 out.go:285] * Related issues:
	* Related issues:
	W1210 07:57:22.774790 1078428 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1210 07:57:22.774803 1078428 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1210 07:57:22.777684 1078428 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p newest-cni-237317 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 105
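The exit above, K8S_APISERVER_MISSING, means minikube polled for 6m0s without ever seeing a kube-apiserver process, and every crictl lookup in the log came back empty. A minimal manual triage sketch, assuming the profile name and binary path from this run (the commands themselves are standard minikube/crictl/journalctl invocations):

	# Check whether an apiserver process ever appears inside the node container.
	out/minikube-linux-arm64 -p newest-cni-237317 ssh -- sudo pgrep -af kube-apiserver
	# List all CRI containers; in the log above every control-plane lookup returned: found id: ""
	out/minikube-linux-arm64 -p newest-cni-237317 ssh -- sudo crictl ps -a
	# kubelet launches the static apiserver pod, so its journal usually names the root cause.
	out/minikube-linux-arm64 -p newest-cni-237317 ssh -- sudo journalctl -u kubelet -n 100 --no-pager
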
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-237317
helpers_test.go:244: (dbg) docker inspect newest-cni-237317:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d",
	        "Created": "2025-12-10T07:41:27.764165056Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1078597,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:51:14.851297935Z",
	            "FinishedAt": "2025-12-10T07:51:13.296430701Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d/hostname",
	        "HostsPath": "/var/lib/docker/containers/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d/hosts",
	        "LogPath": "/var/lib/docker/containers/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d-json.log",
	        "Name": "/newest-cni-237317",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-237317:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-237317",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d",
	                "LowerDir": "/var/lib/docker/overlay2/eb95a77a82f1fcb16098f073f23236757bf5560cf9fb37f652c127fb3ef2dbb4-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eb95a77a82f1fcb16098f073f23236757bf5560cf9fb37f652c127fb3ef2dbb4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eb95a77a82f1fcb16098f073f23236757bf5560cf9fb37f652c127fb3ef2dbb4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eb95a77a82f1fcb16098f073f23236757bf5560cf9fb37f652c127fb3ef2dbb4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-237317",
	                "Source": "/var/lib/docker/volumes/newest-cni-237317/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-237317",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-237317",
	                "name.minikube.sigs.k8s.io": "newest-cni-237317",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1ce3a28f31774fef443c63794bb8a81b083cde3dd4d8dbf17e6f4c44906e905a",
	            "SandboxKey": "/var/run/docker/netns/1ce3a28f3177",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33845"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33849"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33847"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33848"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-237317": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:6f:71:0d:8d:a7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8181aebce826300f2c9eb8f48208470a68f1816a212863fa9c220fbbaa29953b",
	                    "EndpointID": "c0800f293b750ff5d10633caea6a666c9ca543920cb52ef2db3d40a6e4851b98",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-237317",
	                        "a3bfe8c2955a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
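The inspect output above shows the Docker layer itself is healthy: State.Status is "running", StartedAt postdates FinishedAt from the stop/start cycle, and 8443/tcp is published on 127.0.0.1:33848, so the apiserver failure is inside the guest rather than in container networking. A short sketch for extracting just those fields, assuming jq is available for the JSON pretty-print:

	# Container state and restart timing.
	docker inspect -f '{{.State.Status}} started={{.State.StartedAt}} finished={{.State.FinishedAt}}' newest-cni-237317
	# Host port bindings, including 8443/tcp -> 127.0.0.1:33848 used for the apiserver.
	docker inspect -f '{{json .NetworkSettings.Ports}}' newest-cni-237317 | jq .
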
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-237317 -n newest-cni-237317
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-237317 -n newest-cni-237317: exit status 2 (364.295138ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
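Exit status 2 alongside a "Running" host is consistent with the apiserver being down: minikube status exits non-zero whenever a tracked component is unhealthy. A sketch that surfaces the per-component fields (template field names per the minikube status docs):

	out/minikube-linux-arm64 status -p newest-cni-237317 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'
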
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-237317 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-237317 logs -n 25: (1.962385407s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-254586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:39 UTC │
	│ stop    │ -p embed-certs-254586 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:39 UTC │ 10 Dec 25 07:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-254586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ start   │ -p embed-certs-254586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:41 UTC │
	│ image   │ default-k8s-diff-port-444518 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ pause   │ -p default-k8s-diff-port-444518 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ unpause │ -p default-k8s-diff-port-444518 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p default-k8s-diff-port-444518                                                                                                                                                                                                                            │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p default-k8s-diff-port-444518                                                                                                                                                                                                                            │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p disable-driver-mounts-262664                                                                                                                                                                                                                            │ disable-driver-mounts-262664 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ start   │ -p no-preload-587009 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │                     │
	│ image   │ embed-certs-254586 image list --format=json                                                                                                                                                                                                                │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ pause   │ -p embed-certs-254586 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ unpause │ -p embed-certs-254586 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ delete  │ -p embed-certs-254586                                                                                                                                                                                                                                      │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ delete  │ -p embed-certs-254586                                                                                                                                                                                                                                      │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ start   │ -p newest-cni-237317 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-587009 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:49 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-237317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:49 UTC │                     │
	│ stop    │ -p no-preload-587009 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │ 10 Dec 25 07:51 UTC │
	│ addons  │ enable dashboard -p no-preload-587009 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │ 10 Dec 25 07:51 UTC │
	│ start   │ -p no-preload-587009 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │                     │
	│ stop    │ -p newest-cni-237317 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │ 10 Dec 25 07:51 UTC │
	│ addons  │ enable dashboard -p newest-cni-237317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │ 10 Dec 25 07:51 UTC │
	│ start   │ -p newest-cni-237317 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:51:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:51:14.495415 1078428 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:51:14.495519 1078428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:51:14.495524 1078428 out.go:374] Setting ErrFile to fd 2...
	I1210 07:51:14.495529 1078428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:51:14.495772 1078428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 07:51:14.496198 1078428 out.go:368] Setting JSON to false
	I1210 07:51:14.497022 1078428 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23599,"bootTime":1765329476,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 07:51:14.497081 1078428 start.go:143] virtualization:  
	I1210 07:51:14.500489 1078428 out.go:179] * [newest-cni-237317] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:51:14.503586 1078428 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:51:14.503671 1078428 notify.go:221] Checking for updates...
	I1210 07:51:14.509469 1078428 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:51:14.512370 1078428 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:14.515169 1078428 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 07:51:14.518012 1078428 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:51:14.520797 1078428 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:51:14.527169 1078428 config.go:182] Loaded profile config "newest-cni-237317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:14.527731 1078428 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:51:14.566042 1078428 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:51:14.566172 1078428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:51:14.628663 1078428 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-10 07:51:14.618086592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:51:14.628767 1078428 docker.go:319] overlay module found
	I1210 07:51:14.631981 1078428 out.go:179] * Using the docker driver based on existing profile
	I1210 07:51:14.634809 1078428 start.go:309] selected driver: docker
	I1210 07:51:14.634833 1078428 start.go:927] validating driver "docker" against &{Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:51:14.634946 1078428 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:51:14.635637 1078428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:51:14.728404 1078428 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-10 07:51:14.713293715 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:51:14.728788 1078428 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 07:51:14.728810 1078428 cni.go:84] Creating CNI manager for ""
	I1210 07:51:14.728854 1078428 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:51:14.728892 1078428 start.go:353] cluster config:
	{Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:51:14.732274 1078428 out.go:179] * Starting "newest-cni-237317" primary control-plane node in "newest-cni-237317" cluster
	I1210 07:51:14.735049 1078428 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:51:14.738088 1078428 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:51:14.740969 1078428 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:51:14.741011 1078428 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 07:51:14.741020 1078428 cache.go:65] Caching tarball of preloaded images
	I1210 07:51:14.741100 1078428 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 07:51:14.741110 1078428 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1210 07:51:14.741232 1078428 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json ...
	I1210 07:51:14.741437 1078428 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:51:14.763634 1078428 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:51:14.763653 1078428 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:51:14.763668 1078428 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:51:14.763698 1078428 start.go:360] acquireMachinesLock for newest-cni-237317: {Name:mk865fae7594bb364b28b787041666ed4ecb9dd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:51:14.763755 1078428 start.go:364] duration metric: took 40.304µs to acquireMachinesLock for "newest-cni-237317"
	I1210 07:51:14.763774 1078428 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:51:14.763779 1078428 fix.go:54] fixHost starting: 
	I1210 07:51:14.764055 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:14.807148 1078428 fix.go:112] recreateIfNeeded on newest-cni-237317: state=Stopped err=<nil>
	W1210 07:51:14.807188 1078428 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:51:10.742298 1077343 out.go:252] * Restarting existing docker container for "no-preload-587009" ...
	I1210 07:51:10.742407 1077343 cli_runner.go:164] Run: docker start no-preload-587009
	I1210 07:51:11.039727 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:11.064793 1077343 kic.go:430] container "no-preload-587009" state is running.
	I1210 07:51:11.065794 1077343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-587009
	I1210 07:51:11.090953 1077343 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/config.json ...
	I1210 07:51:11.091180 1077343 machine.go:94] provisionDockerMachine start ...
	I1210 07:51:11.091248 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:11.118540 1077343 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:11.118875 1077343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33840 <nil> <nil>}
	I1210 07:51:11.118891 1077343 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:51:11.119530 1077343 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38028->127.0.0.1:33840: read: connection reset by peer
	I1210 07:51:14.269979 1077343 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-587009
	
	I1210 07:51:14.270011 1077343 ubuntu.go:182] provisioning hostname "no-preload-587009"
	I1210 07:51:14.270115 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:14.295536 1077343 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:14.295890 1077343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33840 <nil> <nil>}
	I1210 07:51:14.295901 1077343 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-587009 && echo "no-preload-587009" | sudo tee /etc/hostname
	I1210 07:51:14.452920 1077343 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-587009
	
	I1210 07:51:14.453011 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:14.478828 1077343 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:14.479134 1077343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33840 <nil> <nil>}
	I1210 07:51:14.479150 1077343 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-587009' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-587009/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-587009' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:51:14.626210 1077343 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:51:14.626250 1077343 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 07:51:14.626279 1077343 ubuntu.go:190] setting up certificates
	I1210 07:51:14.626296 1077343 provision.go:84] configureAuth start
	I1210 07:51:14.626367 1077343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-587009
	I1210 07:51:14.653396 1077343 provision.go:143] copyHostCerts
	I1210 07:51:14.653479 1077343 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 07:51:14.653501 1077343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 07:51:14.653585 1077343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 07:51:14.653695 1077343 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 07:51:14.653708 1077343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 07:51:14.653739 1077343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 07:51:14.653813 1077343 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 07:51:14.653823 1077343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 07:51:14.653849 1077343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 07:51:14.653913 1077343 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.no-preload-587009 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-587009]
	I1210 07:51:14.987883 1077343 provision.go:177] copyRemoteCerts
	I1210 07:51:14.987956 1077343 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:51:14.988006 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.016190 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.122129 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:51:15.168648 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:51:15.209293 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:51:15.238881 1077343 provision.go:87] duration metric: took 612.568009ms to configureAuth
	I1210 07:51:15.238905 1077343 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:51:15.239106 1077343 config.go:182] Loaded profile config "no-preload-587009": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:15.239113 1077343 machine.go:97] duration metric: took 4.147925818s to provisionDockerMachine
	I1210 07:51:15.239121 1077343 start.go:293] postStartSetup for "no-preload-587009" (driver="docker")
	I1210 07:51:15.239133 1077343 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:51:15.239186 1077343 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:51:15.239227 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.259116 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.370554 1077343 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:51:15.375386 1077343 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:51:15.375413 1077343 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:51:15.375424 1077343 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 07:51:15.375477 1077343 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 07:51:15.375560 1077343 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 07:51:15.375669 1077343 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:51:15.386817 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:51:15.415888 1077343 start.go:296] duration metric: took 176.733864ms for postStartSetup
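postStartSetup mirrors everything under .minikube/files onto the node at the same relative path, which is how files/etc/ssl/certs/7867512.pem above lands in /etc/ssl/certs. A rough sketch of that path mapping, assuming the same local assets root:

package main

import (
	"fmt"
	"io/fs"
	"log"
	"path/filepath"
)

func main() {
	// Assumed local assets root, matching the directory scanned in the log.
	root := ".minikube/files"
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(root, path)
		if err != nil {
			return err
		}
		// Each asset keeps its relative path on the node, e.g.
		// etc/ssl/certs/7867512.pem -> /etc/ssl/certs/7867512.pem.
		fmt.Printf("%s -> /%s\n", path, filepath.ToSlash(rel))
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}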
	I1210 07:51:15.416018 1077343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:51:15.416065 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.439058 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.548495 1077343 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:51:15.553596 1077343 fix.go:56] duration metric: took 4.831668845s for fixHost
	I1210 07:51:15.553633 1077343 start.go:83] releasing machines lock for "no-preload-587009", held for 4.831730515s
	I1210 07:51:15.553722 1077343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-587009
	I1210 07:51:15.586973 1077343 ssh_runner.go:195] Run: cat /version.json
	I1210 07:51:15.587034 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.587329 1077343 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:51:15.587396 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.629146 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.634697 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.746290 1077343 ssh_runner.go:195] Run: systemctl --version
	I1210 07:51:15.838801 1077343 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:51:15.843040 1077343 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:51:15.843111 1077343 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:51:15.851174 1077343 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
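The find/mv pipeline above side-lines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, leaving /etc/cni/net.d free for kindnet; in this run none were found. A hedged Go equivalent of that rename pass:

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Same filter as the find expression: bridge/podman configs only.
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("disabled", src)
	}
}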
	I1210 07:51:15.851245 1077343 start.go:496] detecting cgroup driver to use...
	I1210 07:51:15.851294 1077343 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:51:15.851351 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:51:15.869860 1077343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:51:15.883702 1077343 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:51:15.883777 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:51:15.899664 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:51:15.913011 1077343 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:51:16.034801 1077343 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:51:16.150617 1077343 docker.go:234] disabling docker service ...
	I1210 07:51:16.150759 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:51:16.165840 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:51:16.180309 1077343 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:51:16.307789 1077343 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:51:16.432072 1077343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:51:16.444962 1077343 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:51:16.459040 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:51:16.467874 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:51:16.476775 1077343 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:51:16.476842 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:51:16.485489 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:51:16.494113 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:51:16.502936 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:51:16.511763 1077343 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:51:16.519893 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:51:16.528779 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:51:16.537342 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:51:16.546138 1077343 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:51:16.553912 1077343 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:51:16.561714 1077343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:16.748597 1077343 ssh_runner.go:195] Run: sudo systemctl restart containerd
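The sed edits above retarget /etc/containerd/config.toml before the restart; with "cgroupfs" detected on the host, the decisive change is forcing SystemdCgroup = false. A sketch of just that substitution, mirroring the sed pattern in Go's regexp:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Mirrors: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}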
	I1210 07:51:16.865266 1077343 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:51:16.865408 1077343 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:51:16.869450 1077343 start.go:564] Will wait 60s for crictl version
	I1210 07:51:16.869562 1077343 ssh_runner.go:195] Run: which crictl
	I1210 07:51:16.873018 1077343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:51:16.900099 1077343 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:51:16.900218 1077343 ssh_runner.go:195] Run: containerd --version
	I1210 07:51:16.923700 1077343 ssh_runner.go:195] Run: containerd --version
	I1210 07:51:16.947379 1077343 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 07:51:16.950227 1077343 cli_runner.go:164] Run: docker network inspect no-preload-587009 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:51:16.965229 1077343 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 07:51:16.969175 1077343 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
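The bash one-liner above strips any stale host.minikube.internal entry from /etc/hosts and re-appends the gateway mapping. The same filter-and-append, sketched in Go (address and hostname taken from the log; the staging path is an assumption):

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.85.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Equivalent of grep -v $'\thost.minikube.internal$'.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	// Stage to a temp path, then sudo cp over /etc/hosts, as the log does.
	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
	fmt.Println("staged", len(kept), "lines in /tmp/hosts.new")
}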
	I1210 07:51:16.978619 1077343 kubeadm.go:884] updating cluster {Name:no-preload-587009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:51:16.978743 1077343 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:51:16.978798 1077343 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:51:17.014301 1077343 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:51:17.014333 1077343 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:51:17.014341 1077343 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1210 07:51:17.014532 1077343 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-587009 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
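The kubelet drop-in above is rendered from the node's settings and later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A small text/template sketch of that rendering; the node struct and its field names are illustrative, while the flag set matches the log:

package main

import (
	"os"
	"text/template"
)

// node carries just what the drop-in needs; the struct is illustrative.
type node struct {
	KubeletPath, Hostname, NodeIP string
}

const unit = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(unit))
	_ = t.Execute(os.Stdout, node{
		KubeletPath: "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
		Hostname:    "no-preload-587009",
		NodeIP:      "192.168.85.2",
	})
}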
	I1210 07:51:17.014625 1077343 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:51:17.044039 1077343 cni.go:84] Creating CNI manager for ""
	I1210 07:51:17.044060 1077343 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:51:17.044082 1077343 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:51:17.044104 1077343 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-587009 NodeName:no-preload-587009 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:51:17.044222 1077343 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-587009"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
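The generated kubeadm config above is plain YAML, so its settings can be sanity-checked programmatically. A sketch that decodes the KubeletConfiguration fragment with the gopkg.in/yaml.v3 module (not stdlib; the trimmed struct covers only the fields checked here):

package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// Only the fields checked here; the real KubeletConfiguration has many more.
type kubeletConfig struct {
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	FailSwapOn               bool   `yaml:"failSwapOn"`
}

const doc = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
failSwapOn: false
`

func main() {
	var c kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &c); err != nil {
		log.Fatal(err)
	}
	// Should agree with the "cgroupfs" driver detected on the host above.
	fmt.Printf("driver=%s endpoint=%s failSwapOn=%v\n",
		c.CgroupDriver, c.ContainerRuntimeEndpoint, c.FailSwapOn)
}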
	
	I1210 07:51:17.044289 1077343 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:51:17.052024 1077343 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:51:17.052101 1077343 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:51:17.059722 1077343 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 07:51:17.072494 1077343 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:51:17.086253 1077343 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1210 07:51:17.099376 1077343 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:51:17.102883 1077343 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:51:17.112330 1077343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:17.225530 1077343 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:51:17.246996 1077343 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009 for IP: 192.168.85.2
	I1210 07:51:17.247021 1077343 certs.go:195] generating shared ca certs ...
	I1210 07:51:17.247038 1077343 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:17.247186 1077343 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 07:51:17.247238 1077343 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 07:51:17.247248 1077343 certs.go:257] generating profile certs ...
	I1210 07:51:17.247347 1077343 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/client.key
	I1210 07:51:17.247407 1077343 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.key.841ee17a
	I1210 07:51:17.247454 1077343 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/proxy-client.key
	I1210 07:51:17.247566 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 07:51:17.247604 1077343 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 07:51:17.247617 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:51:17.247646 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:51:17.247674 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:51:17.247712 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 07:51:17.247768 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:51:17.248384 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:51:17.265969 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:51:17.284190 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:51:17.302881 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:51:17.324073 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:51:17.341990 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 07:51:17.359614 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:51:17.377843 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:51:17.395426 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 07:51:17.413039 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:51:17.430522 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 07:51:17.447821 1077343 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:51:17.460777 1077343 ssh_runner.go:195] Run: openssl version
	I1210 07:51:17.467243 1077343 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 07:51:17.474706 1077343 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 07:51:17.482273 1077343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 07:51:17.485950 1077343 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 07:51:17.486025 1077343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 07:51:17.526902 1077343 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:51:17.534224 1077343 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 07:51:17.541448 1077343 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 07:51:17.549037 1077343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 07:51:17.552765 1077343 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 07:51:17.552832 1077343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 07:51:17.595755 1077343 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:51:17.603128 1077343 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:17.610926 1077343 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:51:17.618981 1077343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:17.622497 1077343 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:17.622563 1077343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:17.663609 1077343 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
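Each openssl x509 -hash call above prints the subject-name hash OpenSSL uses for CA lookup (51391683, 3ec20f2e and b5213941 in this run), and minikube then links /etc/ssl/certs/<hash>.0 at the certificate. A sketch reproducing that pairing, shelling out to openssl since Go's stdlib has no subject-hash helper:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := "/etc/ssl/certs/" + hash + ".0"
	// ln -fs equivalent: drop any stale link, then relink (needs root).
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println(link, "->", cert)
}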
	I1210 07:51:17.670957 1077343 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:51:17.674676 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:51:17.715746 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:51:17.758195 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:51:17.799081 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:51:17.840047 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:51:17.880964 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
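openssl x509 -checkend 86400 exits non-zero when a certificate expires within 24 hours, which is how the run decides the control-plane certs above do not need regeneration. The same check in pure Go, with the path taken from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of -checkend 86400: is the cert still valid 24h from now?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; would regenerate")
	} else {
		fmt.Println("certificate valid past the 24h window")
	}
}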
	I1210 07:51:17.921878 1077343 kubeadm.go:401] StartCluster: {Name:no-preload-587009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:51:17.921988 1077343 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:51:17.922092 1077343 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:51:17.951649 1077343 cri.go:89] found id: ""
	I1210 07:51:17.951796 1077343 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:51:17.959534 1077343 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:51:17.959555 1077343 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:51:17.959635 1077343 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:51:17.966920 1077343 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:51:17.967331 1077343 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-587009" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:17.967425 1077343 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-784887/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-587009" cluster setting kubeconfig missing "no-preload-587009" context setting]
	I1210 07:51:17.967687 1077343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:17.968903 1077343 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:51:17.977669 1077343 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1210 07:51:17.977707 1077343 kubeadm.go:602] duration metric: took 18.146766ms to restartPrimaryControlPlane
	I1210 07:51:17.977718 1077343 kubeadm.go:403] duration metric: took 55.849318ms to StartCluster
	I1210 07:51:17.977733 1077343 settings.go:142] acquiring lock: {Name:mkea9bfe0055ec090e4ce10a287599541b65b38e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:17.977796 1077343 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:17.978427 1077343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:17.978652 1077343 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:51:17.978958 1077343 config.go:182] Loaded profile config "no-preload-587009": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:17.979006 1077343 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:51:17.979072 1077343 addons.go:70] Setting storage-provisioner=true in profile "no-preload-587009"
	I1210 07:51:17.979085 1077343 addons.go:239] Setting addon storage-provisioner=true in "no-preload-587009"
	I1210 07:51:17.979106 1077343 host.go:66] Checking if "no-preload-587009" exists ...
	I1210 07:51:17.979123 1077343 addons.go:70] Setting dashboard=true in profile "no-preload-587009"
	I1210 07:51:17.979139 1077343 addons.go:239] Setting addon dashboard=true in "no-preload-587009"
	W1210 07:51:17.979155 1077343 addons.go:248] addon dashboard should already be in state true
	I1210 07:51:17.979179 1077343 host.go:66] Checking if "no-preload-587009" exists ...
	I1210 07:51:17.979564 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:17.979606 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:17.982091 1077343 addons.go:70] Setting default-storageclass=true in profile "no-preload-587009"
	I1210 07:51:17.982247 1077343 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-587009"
	I1210 07:51:17.983173 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:17.984528 1077343 out.go:179] * Verifying Kubernetes components...
	I1210 07:51:17.987357 1077343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:18.030694 1077343 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:51:18.030828 1077343 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 07:51:18.034622 1077343 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 07:51:14.810511 1078428 out.go:252] * Restarting existing docker container for "newest-cni-237317" ...
	I1210 07:51:14.810602 1078428 cli_runner.go:164] Run: docker start newest-cni-237317
	I1210 07:51:15.140257 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:15.163514 1078428 kic.go:430] container "newest-cni-237317" state is running.
	I1210 07:51:15.165120 1078428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:51:15.200178 1078428 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json ...
	I1210 07:51:15.200425 1078428 machine.go:94] provisionDockerMachine start ...
	I1210 07:51:15.200484 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:15.234652 1078428 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:15.234972 1078428 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33845 <nil> <nil>}
	I1210 07:51:15.234980 1078428 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:51:15.238112 1078428 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 07:51:18.394621 1078428 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-237317
	
	I1210 07:51:18.394726 1078428 ubuntu.go:182] provisioning hostname "newest-cni-237317"
	I1210 07:51:18.394818 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:18.424081 1078428 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:18.424400 1078428 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33845 <nil> <nil>}
	I1210 07:51:18.424411 1078428 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-237317 && echo "newest-cni-237317" | sudo tee /etc/hostname
	I1210 07:51:18.589360 1078428 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-237317
	
	I1210 07:51:18.589454 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:18.613196 1078428 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:18.613511 1078428 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33845 <nil> <nil>}
	I1210 07:51:18.613536 1078428 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-237317' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-237317/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-237317' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:51:18.750663 1078428 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:51:18.750693 1078428 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 07:51:18.750726 1078428 ubuntu.go:190] setting up certificates
	I1210 07:51:18.750745 1078428 provision.go:84] configureAuth start
	I1210 07:51:18.750808 1078428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:51:18.768151 1078428 provision.go:143] copyHostCerts
	I1210 07:51:18.768234 1078428 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 07:51:18.768250 1078428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 07:51:18.768328 1078428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 07:51:18.768450 1078428 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 07:51:18.768462 1078428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 07:51:18.768492 1078428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 07:51:18.768566 1078428 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 07:51:18.768583 1078428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 07:51:18.768617 1078428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 07:51:18.768682 1078428 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.newest-cni-237317 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-237317]
	I1210 07:51:19.084729 1078428 provision.go:177] copyRemoteCerts
	I1210 07:51:19.084804 1078428 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:51:19.084849 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.104109 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:19.203019 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:51:19.223435 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:51:19.240802 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:51:19.257611 1078428 provision.go:87] duration metric: took 506.840522ms to configureAuth
	I1210 07:51:19.257643 1078428 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:51:19.257850 1078428 config.go:182] Loaded profile config "newest-cni-237317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:19.257864 1078428 machine.go:97] duration metric: took 4.057430572s to provisionDockerMachine
	I1210 07:51:19.257873 1078428 start.go:293] postStartSetup for "newest-cni-237317" (driver="docker")
	I1210 07:51:19.257887 1078428 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:51:19.257947 1078428 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:51:19.257992 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.274867 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:19.371336 1078428 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:51:19.375463 1078428 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:51:19.375497 1078428 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:51:19.375509 1078428 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 07:51:19.375559 1078428 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 07:51:19.375641 1078428 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 07:51:19.375745 1078428 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:51:19.386080 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:51:19.406230 1078428 start.go:296] duration metric: took 148.339109ms for postStartSetup
	I1210 07:51:19.406314 1078428 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:51:19.406379 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.424523 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:18.034780 1077343 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:18.034793 1077343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:51:18.034874 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:18.037543 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 07:51:18.037568 1077343 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 07:51:18.037639 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:18.041604 1077343 addons.go:239] Setting addon default-storageclass=true in "no-preload-587009"
	I1210 07:51:18.041645 1077343 host.go:66] Checking if "no-preload-587009" exists ...
	I1210 07:51:18.042060 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:18.105147 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:18.114730 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:18.115497 1077343 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:18.115511 1077343 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:51:18.115563 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:18.135449 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:18.230094 1077343 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:51:18.264441 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:18.283658 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 07:51:18.283729 1077343 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 07:51:18.329062 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 07:51:18.329133 1077343 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 07:51:18.353549 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 07:51:18.353629 1077343 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 07:51:18.357622 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:18.376127 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 07:51:18.376202 1077343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 07:51:18.447999 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 07:51:18.448021 1077343 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 07:51:18.470186 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 07:51:18.470208 1077343 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 07:51:18.489233 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 07:51:18.489255 1077343 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 07:51:18.503805 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 07:51:18.503828 1077343 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 07:51:18.521545 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:18.521566 1077343 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 07:51:18.536611 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:19.053453 1077343 node_ready.go:35] waiting up to 6m0s for node "no-preload-587009" to be "Ready" ...
	W1210 07:51:19.053800 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.053834 1077343 retry.go:31] will retry after 261.467752ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:19.053883 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.053894 1077343 retry.go:31] will retry after 368.94912ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:19.054089 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.054104 1077343 retry.go:31] will retry after 338.426434ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.315446 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:19.382015 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.382044 1077343 retry.go:31] will retry after 337.060159ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.393358 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:19.424101 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:19.491743 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.491780 1077343 retry.go:31] will retry after 471.881278ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:19.538786 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.538838 1077343 retry.go:31] will retry after 528.879721ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.719721 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:19.790713 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.790742 1077343 retry.go:31] will retry after 510.29035ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.964160 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:20.068233 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:20.070746 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.070792 1077343 retry.go:31] will retry after 543.265245ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:20.148457 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.148492 1077343 retry.go:31] will retry after 460.630823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.301882 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:20.397427 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.397476 1077343 retry.go:31] will retry after 801.303312ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
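
The wall of identical "connection refused" failures above is minikube's addon installer looping: each kubectl apply fails because nothing is listening on localhost:8443 yet, and retry.go schedules another attempt a few hundred milliseconds later. Below is a minimal Go sketch of that retry-with-jittered-backoff pattern; the helper name, attempt count, and delays are illustrative assumptions, not minikube's actual retry.go implementation.

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff keeps calling fn until it succeeds or attempts run
    // out, sleeping a randomized, growing interval between tries.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// Base interval plus up to one extra base of jitter, which is
    		// why the delays logged above are odd fractional durations.
    		sleep := base + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		base *= 2
    	}
    	return err
    }

    func main() {
    	// Against a permanently refused endpoint, as in this log, the loop
    	// simply burns its attempts and surfaces the last error.
    	err := retryWithBackoff(3, 300*time.Millisecond, func() error {
    		return fmt.Errorf("dial tcp [::1]:8443: connect: connection refused")
    	})
    	fmt.Println("gave up:", err)
    }
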
	I1210 07:51:19.524843 1078428 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:51:19.530920 1078428 fix.go:56] duration metric: took 4.767134196s for fixHost
	I1210 07:51:19.530943 1078428 start.go:83] releasing machines lock for "newest-cni-237317", held for 4.767180038s
	I1210 07:51:19.531010 1078428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:51:19.550838 1078428 ssh_runner.go:195] Run: cat /version.json
	I1210 07:51:19.550877 1078428 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:51:19.550890 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.550934 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.570871 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:19.573219 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:19.666233 1078428 ssh_runner.go:195] Run: systemctl --version
	I1210 07:51:19.757488 1078428 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:51:19.762554 1078428 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:51:19.762646 1078428 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:51:19.772614 1078428 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:51:19.772688 1078428 start.go:496] detecting cgroup driver to use...
	I1210 07:51:19.772735 1078428 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:51:19.772810 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:51:19.790830 1078428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:51:19.808563 1078428 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:51:19.808685 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:51:19.825219 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:51:19.839550 1078428 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:51:19.957848 1078428 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:51:20.106011 1078428 docker.go:234] disabling docker service ...
	I1210 07:51:20.106089 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:51:20.124597 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:51:20.139030 1078428 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:51:20.264730 1078428 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:51:20.405057 1078428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:51:20.418041 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:51:20.434060 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:51:20.443707 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:51:20.453162 1078428 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:51:20.453287 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:51:20.462485 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:51:20.471477 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:51:20.480685 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:51:20.489771 1078428 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:51:20.498259 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:51:20.507883 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:51:20.516803 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:51:20.525782 1078428 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:51:20.533254 1078428 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:51:20.540718 1078428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:20.693669 1078428 ssh_runner.go:195] Run: sudo systemctl restart containerd
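
The run of sed commands above rewrites /etc/containerd/config.toml in place: pin the pause image, set restrict_oom_score_adj and SystemdCgroup to false (matching the detected cgroupfs driver), normalize the runc runtime to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-enable unprivileged ports, after which containerd is restarted. A rough Go equivalent of just the SystemdCgroup rewrite, using a multiline regexp instead of sed, might look like this (the path and pattern mirror the logged command, but this is a sketch and must run as root to write the file):

    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// (?m) makes ^ and $ match per line, like sed's default behavior;
    	// ${1} preserves the original indentation captured by (\s*).
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0o644); err != nil {
    		log.Fatal(err)
    	}
    }
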
	I1210 07:51:20.831153 1078428 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:51:20.831249 1078428 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:51:20.835049 1078428 start.go:564] Will wait 60s for crictl version
	I1210 07:51:20.835127 1078428 ssh_runner.go:195] Run: which crictl
	I1210 07:51:20.838628 1078428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:51:20.863125 1078428 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:51:20.863217 1078428 ssh_runner.go:195] Run: containerd --version
	I1210 07:51:20.884709 1078428 ssh_runner.go:195] Run: containerd --version
	I1210 07:51:20.910533 1078428 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 07:51:20.913646 1078428 cli_runner.go:164] Run: docker network inspect newest-cni-237317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:51:20.930416 1078428 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 07:51:20.934716 1078428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
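
The bash one-liner above is an idempotent /etc/hosts update: drop any existing line for host.minikube.internal, append the fresh gateway mapping, and copy the temp file back into place with sudo. A sketch of the same update in Go, assuming the file is small enough to rewrite wholesale:

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.76.1\thost.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		log.Fatal(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any stale mapping, matching the grep -v in the log.
    		if !strings.HasSuffix(line, "\thost.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }
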
	I1210 07:51:20.948181 1078428 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 07:51:20.951046 1078428 kubeadm.go:884] updating cluster {Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:51:20.951211 1078428 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:51:20.951303 1078428 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:51:20.976663 1078428 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:51:20.976691 1078428 containerd.go:534] Images already preloaded, skipping extraction
	I1210 07:51:20.976756 1078428 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:51:21.000721 1078428 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:51:21.000745 1078428 cache_images.go:86] Images are preloaded, skipping loading
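
The preload check above shells out to crictl and concludes every required image is already in the containerd image store, so no tarball extraction or image loading is needed. A sketch of how such a check can read the runtime's inventory; the JSON field names follow the camelCase that crictl's --output json emits for the CRI ListImagesResponse, but treat them as assumptions:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    // imageList mirrors the subset of `crictl images --output json` we need.
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		log.Fatal(err)
    	}
    	// A real preload check would diff these tags against the expected
    	// image set for the target Kubernetes version.
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			fmt.Println(tag)
    		}
    	}
    }
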
	I1210 07:51:21.000753 1078428 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1210 07:51:21.000851 1078428 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-237317 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:51:21.000919 1078428 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:51:21.027129 1078428 cni.go:84] Creating CNI manager for ""
	I1210 07:51:21.027160 1078428 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:51:21.027182 1078428 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 07:51:21.027206 1078428 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-237317 NodeName:newest-cni-237317 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:51:21.027326 1078428 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-237317"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:51:21.027402 1078428 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:51:21.035339 1078428 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:51:21.035477 1078428 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:51:21.043040 1078428 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 07:51:21.056144 1078428 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:51:21.068486 1078428 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
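
The 2235-byte kubeadm.yaml.new written above is the multi-document YAML dumped earlier: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by ---. A small Go sketch that walks such a file document by document with gopkg.in/yaml.v3 (the local file name is an assumption, and minikube renders this file from templates rather than parsing it back; this is purely illustrative):

    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()
    	// yaml.Decoder yields one document per Decode call, stopping at EOF,
    	// which is how a four-document kubeadm config can be inspected.
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			log.Fatal(err)
    		}
    		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
    	}
    }
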
	I1210 07:51:21.080830 1078428 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:51:21.084334 1078428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:51:21.093747 1078428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:21.227754 1078428 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:51:21.255098 1078428 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317 for IP: 192.168.76.2
	I1210 07:51:21.255120 1078428 certs.go:195] generating shared ca certs ...
	I1210 07:51:21.255146 1078428 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:21.255299 1078428 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 07:51:21.255358 1078428 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 07:51:21.255372 1078428 certs.go:257] generating profile certs ...
	I1210 07:51:21.255486 1078428 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.key
	I1210 07:51:21.255553 1078428 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f
	I1210 07:51:21.255599 1078428 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key
	I1210 07:51:21.255719 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 07:51:21.255759 1078428 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 07:51:21.255770 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:51:21.255801 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:51:21.255838 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:51:21.255870 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 07:51:21.255919 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:51:21.256545 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:51:21.311093 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:51:21.352581 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:51:21.373410 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:51:21.394506 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:51:21.429692 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:51:21.462387 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:51:21.492668 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 07:51:21.520168 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 07:51:21.538625 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:51:21.556477 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 07:51:21.574823 1078428 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:51:21.587970 1078428 ssh_runner.go:195] Run: openssl version
	I1210 07:51:21.594082 1078428 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:21.601606 1078428 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:51:21.609233 1078428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:21.613206 1078428 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:21.613303 1078428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:21.655122 1078428 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:51:21.662415 1078428 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 07:51:21.669633 1078428 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 07:51:21.677051 1078428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 07:51:21.680913 1078428 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 07:51:21.680973 1078428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 07:51:21.722892 1078428 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:51:21.730172 1078428 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 07:51:21.737341 1078428 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 07:51:21.744828 1078428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 07:51:21.748681 1078428 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 07:51:21.748767 1078428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 07:51:21.790554 1078428 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:51:21.797952 1078428 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:51:21.801618 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:51:21.842558 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:51:21.883251 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:51:21.924099 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:51:21.965360 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:51:22.007244 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
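
Each openssl x509 ... -checkend 86400 call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the cert check decides whether a profile certificate needs regeneration; the earlier -hash / ln -fs steps maintain the <subject-hash>.0 symlinks OpenSSL uses to look up trusted certs in /etc/ssl/certs. A Go approximation of the -checkend test with crypto/x509 (the certificate path is taken from the log; the rest is an illustrative sketch):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Equivalent of -checkend 86400: does NotAfter fall within 24h?
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 24h")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least 24h")
    }
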
	I1210 07:51:22.049094 1078428 kubeadm.go:401] StartCluster: {Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:51:22.049233 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:51:22.049334 1078428 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:51:22.093879 1078428 cri.go:89] found id: ""
	I1210 07:51:22.094034 1078428 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:51:22.108858 1078428 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:51:22.108920 1078428 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:51:22.109002 1078428 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:51:22.119866 1078428 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:51:22.120478 1078428 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-237317" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:22.120794 1078428 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-784887/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-237317" cluster setting kubeconfig missing "newest-cni-237317" context setting]
	I1210 07:51:22.121355 1078428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:22.123034 1078428 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:51:22.139211 1078428 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1210 07:51:22.139284 1078428 kubeadm.go:602] duration metric: took 30.344057ms to restartPrimaryControlPlane
	I1210 07:51:22.139309 1078428 kubeadm.go:403] duration metric: took 90.22699ms to StartCluster
	I1210 07:51:22.139351 1078428 settings.go:142] acquiring lock: {Name:mkea9bfe0055ec090e4ce10a287599541b65b38e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:22.139430 1078428 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:22.140615 1078428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:22.141197 1078428 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:51:22.141378 1078428 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:51:22.149299 1078428 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-237317"
	I1210 07:51:22.149322 1078428 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-237317"
	I1210 07:51:22.149353 1078428 host.go:66] Checking if "newest-cni-237317" exists ...
	I1210 07:51:22.149966 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:22.141985 1078428 config.go:182] Loaded profile config "newest-cni-237317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:22.150417 1078428 addons.go:70] Setting dashboard=true in profile "newest-cni-237317"
	I1210 07:51:22.150441 1078428 addons.go:239] Setting addon dashboard=true in "newest-cni-237317"
	W1210 07:51:22.150449 1078428 addons.go:248] addon dashboard should already be in state true
	I1210 07:51:22.150502 1078428 host.go:66] Checking if "newest-cni-237317" exists ...
	I1210 07:51:22.151022 1078428 addons.go:70] Setting default-storageclass=true in profile "newest-cni-237317"
	I1210 07:51:22.151064 1078428 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-237317"
	I1210 07:51:22.151139 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:22.151406 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:22.154353 1078428 out.go:179] * Verifying Kubernetes components...
	I1210 07:51:22.159801 1078428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:22.209413 1078428 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:51:22.216779 1078428 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:22.216810 1078428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:51:22.216899 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:22.223328 1078428 addons.go:239] Setting addon default-storageclass=true in "newest-cni-237317"
	I1210 07:51:22.223372 1078428 host.go:66] Checking if "newest-cni-237317" exists ...
	I1210 07:51:22.223787 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:22.224255 1078428 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 07:51:22.227259 1078428 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 07:51:22.230643 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 07:51:22.230670 1078428 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 07:51:22.230738 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:22.262205 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:22.304886 1078428 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:22.304913 1078428 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:51:22.305020 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:22.320571 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:22.350629 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
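
The docker container inspect calls above recover the host port Docker mapped to the container's 22/tcp (33845 in this run), which sshutil then dials on 127.0.0.1. A minimal Go sketch of the same lookup, reusing the exact Go template from the log (container name taken from this run; illustrative only, not minikube's code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same Go template as the log's docker inspect call: index into
        // NetworkSettings.Ports["22/tcp"] and take the first binding's HostPort.
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
            "newest-cni-237317").Output()
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("ssh port:", strings.TrimSpace(string(out))) // e.g. 33845
    }
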
	I1210 07:51:22.414331 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:22.428355 1078428 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:51:22.476480 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 07:51:22.476506 1078428 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 07:51:22.499604 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:22.511381 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.511434 1078428 retry.go:31] will retry after 354.449722ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.512377 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 07:51:22.512398 1078428 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 07:51:22.525695 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 07:51:22.525721 1078428 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 07:51:22.549890 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 07:51:22.549921 1078428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 07:51:22.571318 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 07:51:22.571360 1078428 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 07:51:22.590078 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 07:51:22.590107 1078428 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 07:51:22.605317 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 07:51:22.605341 1078428 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 07:51:22.618168 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 07:51:22.618200 1078428 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 07:51:22.632058 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:22.632138 1078428 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 07:51:22.645108 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:22.866802 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:23.047272 1078428 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:51:23.047355 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
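
Alongside the addon retries, api_server.go polls for an apiserver process with sudo pgrep -xnf kube-apiserver.*minikube.*; the connection-refused errors above can only clear once that process exists and is listening on 8443. A sketch of the same readiness wait expressed at the TCP level (an assumed equivalent, not minikube's code):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForPort polls the apiserver's secure port until it accepts TCP
    // connections, which is the point where the "connection refused"
    // retries above would start succeeding.
    func waitForPort(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
        if err := waitForPort("localhost:8443", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
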
	W1210 07:51:23.047482 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.047505 1078428 retry.go:31] will retry after 239.047353ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:23.047709 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.047727 1078428 retry.go:31] will retry after 188.716917ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:23.047786 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.047796 1078428 retry.go:31] will retry after 517.712293ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.237633 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:23.287256 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:23.302152 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.302252 1078428 retry.go:31] will retry after 469.586518ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:23.346821 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.346867 1078428 retry.go:31] will retry after 517.463027ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.548102 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:23.566734 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:23.638131 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.638161 1078428 retry.go:31] will retry after 398.122111ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.772509 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:23.859471 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.859510 1078428 retry.go:31] will retry after 826.751645ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.865483 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:23.933950 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.933981 1078428 retry.go:31] will retry after 776.320293ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.037254 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:24.047892 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:24.103304 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.103348 1078428 retry.go:31] will retry after 781.805737ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.609734 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:20.615162 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:20.763154 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.763202 1077343 retry.go:31] will retry after 629.698549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:20.763322 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.763340 1077343 retry.go:31] will retry after 624.408887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:21.054168 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
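
Here the log interleaves the no-preload-587009 profile (pid 1077343), stuck on the same symptom from the node side: node_ready.go cannot even fetch the node object while 192.168.85.2:8443 refuses connections. The Ready check it keeps retrying boils down to reading the node's NodeReady condition, sketched with client-go below (an assumed equivalent of the check, not minikube's source; the kubeconfig path and node name are taken from this log):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady fetches the node and reports whether its NodeReady
    // condition is True. While the apiserver is down, Get returns the
    // same "connect: connection refused" error seen in the log.
    func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
        n, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ready, err := nodeReady(cs, "no-preload-587009")
        fmt.Println(ready, err)
    }
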
	I1210 07:51:21.199599 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:21.288128 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:21.288156 1077343 retry.go:31] will retry after 1.429543278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:21.388486 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:21.393905 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:21.513396 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:21.513426 1077343 retry.go:31] will retry after 1.363983036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:21.522339 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:21.522370 1077343 retry.go:31] will retry after 1.881789089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.718226 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:22.784732 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.784765 1077343 retry.go:31] will retry after 2.14784628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.877998 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:22.948118 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.948146 1077343 retry.go:31] will retry after 2.832610868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:23.054783 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
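
This probe is worth noting: the kubectl applies fail against localhost:8443, and here the node readiness check fails against the node address 192.168.85.2:8443 as well, so the apiserver itself is down rather than any local tunnel or port mapping. A standalone reachability check for both endpoints, assuming anonymous access to /healthz (as kubeadm clusters normally allow) and skipping certificate verification for brevity, could look like:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Reachability only, so certificate verification is skipped on purpose.
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for _, url := range []string{
            "https://localhost:8443/healthz",
            "https://192.168.85.2:8443/healthz",
        } {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Printf("%s: %v\n", url, err) // "connection refused" matches the log
                continue
            }
            resp.Body.Close()
            fmt.Printf("%s: %s\n", url, resp.Status)
        }
    }
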
	I1210 07:51:23.404396 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:23.467879 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.467914 1077343 retry.go:31] will retry after 2.135960827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.933362 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:24.999854 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.999895 1077343 retry.go:31] will retry after 3.6382738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.548307 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:24.687434 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:24.711319 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:24.773539 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.773577 1078428 retry.go:31] will retry after 997.771985ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:24.790786 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.790863 1078428 retry.go:31] will retry after 982.839582ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.886098 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:24.963470 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.963508 1078428 retry.go:31] will retry after 1.65409552s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.047816 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:25.547590 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:25.771778 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:25.774151 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:25.936732 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.936801 1078428 retry.go:31] will retry after 1.015181303s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:25.947734 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.947767 1078428 retry.go:31] will retry after 1.482437442s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:26.048146 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:26.547461 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:26.617808 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:26.678401 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:26.678435 1078428 retry.go:31] will retry after 1.557494695s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
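
Interleaved with the applies, PID 1078428 polls "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly every 500ms (07:51:24.548, 25.047, 25.547, 26.048, ...), waiting for the kube-apiserver process to reappear before the API can become reachable again. pgrep exits 0 only when a process matches the pattern, so the loop condition is just the exit status. A minimal sketch of that polling, hypothetical and not minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls pgrep until a process matches pattern or the
    // timeout elapses; pgrep exits 0 only on a match.
    func waitForProcess(pattern string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("no process matching %q within %s", pattern, timeout)
    }

    func main() {
        if err := waitForProcess("kube-apiserver.*minikube.*", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }
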
	I1210 07:51:26.952842 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:27.019482 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.019568 1078428 retry.go:31] will retry after 1.273355747s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.047573 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:27.431325 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:27.498014 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.498046 1078428 retry.go:31] will retry after 1.046464225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.548153 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:28.047490 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:28.236708 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:28.293309 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:28.313086 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.313117 1078428 retry.go:31] will retry after 2.925748723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:28.376082 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.376136 1078428 retry.go:31] will retry after 3.458373128s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.545585 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:28.548098 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:28.611335 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.611369 1078428 retry.go:31] will retry after 3.856495335s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:29.047665 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:25.554994 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:25.604337 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:25.669224 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.669262 1077343 retry.go:31] will retry after 2.194006804s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.781321 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:25.929708 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.929740 1077343 retry.go:31] will retry after 3.276039002s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.863966 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:27.927673 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.927709 1077343 retry.go:31] will retry after 5.303571514s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:28.054575 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:28.639292 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:28.698653 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.698686 1077343 retry.go:31] will retry after 3.005783671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:29.206806 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:29.264930 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:29.264960 1077343 retry.go:31] will retry after 2.489245949s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:29.547947 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:30.047725 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:30.548382 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:31.048336 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:31.239688 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:31.305382 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.305411 1078428 retry.go:31] will retry after 5.48588333s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.547900 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:31.835667 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:31.907250 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.907288 1078428 retry.go:31] will retry after 3.413940388s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:32.047433 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:32.468741 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:32.529582 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:32.529616 1078428 retry.go:31] will retry after 2.765741211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:32.547808 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:33.048388 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:33.547638 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:34.048299 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:30.554528 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:31.705403 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:31.754983 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:31.764053 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.764088 1077343 retry.go:31] will retry after 6.263299309s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:31.824900 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.824937 1077343 retry.go:31] will retry after 8.063912103s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:32.554572 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:33.232049 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:33.291801 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:33.291838 1077343 retry.go:31] will retry after 5.361341065s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:34.554757 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:34.547845 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:35.048329 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:35.295932 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:35.322379 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:35.361522 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:35.361555 1078428 retry.go:31] will retry after 3.648316362s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:35.394430 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:35.394485 1078428 retry.go:31] will retry after 5.549499405s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:35.547462 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:36.048235 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:36.547640 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:36.792053 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:36.857078 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:36.857110 1078428 retry.go:31] will retry after 8.697501731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:37.048326 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:37.548396 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:38.047529 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:38.547464 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:39.010651 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:39.048217 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:39.071638 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:39.071669 1078428 retry.go:31] will retry after 13.355816146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:37.053891 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:38.027881 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:38.116733 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:38.116768 1077343 retry.go:31] will retry after 12.105620641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:38.653613 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:38.715053 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:38.715087 1077343 retry.go:31] will retry after 11.375750542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:39.554885 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:39.889521 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:39.947993 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:39.948032 1077343 retry.go:31] will retry after 6.34767532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:39.547555 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:40.048271 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:40.548333 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:40.944176 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:41.005827 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:41.005869 1078428 retry.go:31] will retry after 6.58383212s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:41.047819 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:41.547642 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:42.048470 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:42.547646 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:43.047482 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:43.548313 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:44.048345 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
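
[Editor's note] Interleaved with the apply retries, profile 1078428 polls `pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms, waiting for the apiserver process to reappear. Every kubectl error in this stretch is the same `dial tcp [::1]:8443: connect: connection refused`, i.e. nothing is listening on the apiserver port yet. A minimal sketch of an equivalent readiness gate (hypothetical, not minikube's code) that waits on the port rather than the process:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer polls the apiserver's TCP port until something accepts
// a connection or the deadline passes. Until this succeeds, every kubectl
// call fails with "connect: connection refused", as in the log above.
func waitForAPIServer(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond) // roughly the poll cadence in the log
	}
	return fmt.Errorf("apiserver at %s not reachable within %v", addr, timeout)
}

func main() {
	if err := waitForAPIServer("localhost:8443", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```

Note that the `--validate=false` workaround suggested in the stderr would only skip the client-side OpenAPI download; the apply itself would still fail while nothing is serving 8443.
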
	W1210 07:51:42.054758 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:51:44.554149 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:44.547780 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:45.048251 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:45.547682 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:45.555791 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:45.648631 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:45.648667 1078428 retry.go:31] will retry after 11.694093059s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:46.048267 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:46.547745 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:47.047711 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:47.547488 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:47.590140 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:47.657175 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:47.657216 1078428 retry.go:31] will retry after 17.707179987s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:48.047554 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:48.547523 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:49.048229 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:46.296554 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:46.375385 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:46.375418 1077343 retry.go:31] will retry after 17.860418691s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:47.054540 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:51:49.054867 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:50.091584 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:50.153219 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:50.153253 1077343 retry.go:31] will retry after 15.008999648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:50.223406 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:50.279259 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:50.279296 1077343 retry.go:31] will retry after 9.416080018s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:49.547855 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:50.048310 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:50.547470 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:51.048482 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:51.547803 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:52.048220 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:52.428493 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:52.490932 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:52.490967 1078428 retry.go:31] will retry after 16.825164958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:52.548145 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:53.047509 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:53.548344 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:54.047578 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:51.553954 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:51:53.554866 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:54.547773 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:55.047551 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:55.547690 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:56.047804 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:56.547512 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:57.048500 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:57.343638 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:57.401827 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:57.401862 1078428 retry.go:31] will retry after 12.086669618s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:57.548118 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:58.047490 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:58.547566 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:59.047512 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:56.054059 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:51:58.554867 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:59.696250 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:59.757338 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:59.757373 1077343 retry.go:31] will retry after 26.778697297s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:59.547820 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:00.048277 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:00.547702 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:01.047690 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:01.548160 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:02.047532 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:02.547658 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:03.048174 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:03.547494 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:04.047488 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:52:01.054130 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:03.554867 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
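
[Editor's note] The `node_ready.go:55` warnings from the no-preload profile are the same failure at a higher level: the checker GETs `/api/v1/nodes/no-preload-587009` to read the node's `Ready` condition and gets connection refused from 192.168.85.2:8443. A sketch of what such a check amounts to, written against client-go (illustrative only, not minikube's implementation):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady fetches one node and reports whether its Ready condition is
// True. While the apiserver is down, the Get itself fails with the same
// "connection refused" seen in the node_ready.go warnings above.
func nodeIsReady(kubeconfig, name string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. dial tcp 192.168.85.2:8443: connect: connection refused
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	ready, err := nodeIsReady("/var/lib/minikube/kubeconfig", "no-preload-587009")
	fmt.Println(ready, err)
}
```
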
	I1210 07:52:04.236888 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:04.303052 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:04.303083 1077343 retry.go:31] will retry after 25.859676141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:05.163286 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:52:05.227326 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:05.227361 1077343 retry.go:31] will retry after 29.528693098s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
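
The dashboard addon is applied as ten manifests in one kubectl invocation by repeating -f, and each file is validated independently, which is why a single unreachable apiserver produces ten identical validation errors per attempt. A sketch of building such a multi-manifest invocation (file list abbreviated):

package main

import (
	"fmt"
	"os/exec"
)

// applyAll applies several manifests in one kubectl run by repeating -f,
// matching the single-invocation style in the log above.
func applyAll(files ...string) error {
	args := []string{"apply", "--force"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		// kubectl reports one validation error per failing file.
		return fmt.Errorf("kubectl %v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	err := applyAll(
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-sa.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	)
	if err != nil {
		fmt.Println(err)
	}
}
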
	I1210 07:52:04.547752 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:05.047684 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:05.364684 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:52:05.426426 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:05.426483 1078428 retry.go:31] will retry after 20.310563443s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:05.547649 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:06.047574 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:06.547647 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:07.048386 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:07.548191 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:08.047499 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:08.547510 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:09.047557 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:09.316912 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:09.386785 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:09.386818 1078428 retry.go:31] will retry after 17.689212788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:09.489070 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:52:06.053981 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:08.554858 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:09.547482 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:52:09.552880 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:09.552917 1078428 retry.go:31] will retry after 27.483688335s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:10.047697 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:10.548124 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:11.047626 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:11.548296 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:12.048335 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:12.548247 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:13.047495 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:13.547530 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:14.047549 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:52:11.053980 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:13.054863 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:15.055109 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:14.547736 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:15.047574 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:15.548227 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:16.047516 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:16.548114 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:17.047567 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:17.547679 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:18.048185 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:18.548203 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:19.047660 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:52:17.055513 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:19.553887 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:19.547978 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:20.048384 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:20.548389 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:21.048134 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:21.547434 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:22.048274 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:22.547540 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:22.547641 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:22.572419 1078428 cri.go:89] found id: ""
	I1210 07:52:22.572446 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.572457 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:22.572464 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:22.572530 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:22.596895 1078428 cri.go:89] found id: ""
	I1210 07:52:22.596923 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.596931 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:22.596938 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:22.597000 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:22.621678 1078428 cri.go:89] found id: ""
	I1210 07:52:22.621705 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.621713 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:22.621720 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:22.621783 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:22.646160 1078428 cri.go:89] found id: ""
	I1210 07:52:22.646188 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.646198 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:22.646205 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:22.646270 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:22.671641 1078428 cri.go:89] found id: ""
	I1210 07:52:22.671670 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.671680 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:22.671686 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:22.671750 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:22.697149 1078428 cri.go:89] found id: ""
	I1210 07:52:22.697177 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.697187 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:22.697194 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:22.697255 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:22.722276 1078428 cri.go:89] found id: ""
	I1210 07:52:22.722300 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.722318 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:22.722324 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:22.722388 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:22.751396 1078428 cri.go:89] found id: ""
	I1210 07:52:22.751422 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.751431 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
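
At this point the run sweeps crictl once per control-plane component. With --quiet, crictl prints one container ID per line, so empty output is exactly the `found id: ""` / `0 containers` sequence seen above. A small sketch of that sweep:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name matches
// the given filter, using crictl's quiet mode.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // empty output -> no containers
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
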
	I1210 07:52:22.751440 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:22.751452 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:22.806571 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:22.806611 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:22.824584 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:22.824623 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:22.902683 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:22.894538    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.894950    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.896552    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.897020    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.898547    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:22.894538    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.894950    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.896552    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.897020    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.898547    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:22.902704 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:22.902719 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:22.928289 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:22.928326 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
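
With no control-plane containers found, the run falls back to gathering diagnostics: the kubelet and containerd journals, filtered dmesg, a (failing) describe nodes, and raw container status. A sketch that runs the same shell pipelines shown in the log and keeps going past failures, so one dead source (here, the apiserver) does not hide the others:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs each diagnostic command through bash -c, as the log does,
// and records output even when the command fails.
func gather() map[string]string {
	cmds := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	logs := make(map[string]string, len(cmds))
	for name, cmd := range cmds {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			out = append(out, []byte(fmt.Sprintf("\n(gather %s failed: %v)", name, err))...)
		}
		logs[name] = string(out)
	}
	return logs
}

func main() {
	for name, text := range gather() {
		fmt.Printf("=== %s ===\n%s\n", name, text)
	}
}
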
	W1210 07:52:21.554922 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:24.054424 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:25.461464 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:25.472201 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:25.472303 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:25.498226 1078428 cri.go:89] found id: ""
	I1210 07:52:25.498253 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.498263 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:25.498269 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:25.498331 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:25.524731 1078428 cri.go:89] found id: ""
	I1210 07:52:25.524759 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.524777 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:25.524789 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:25.524855 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:25.554155 1078428 cri.go:89] found id: ""
	I1210 07:52:25.554178 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.554187 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:25.554194 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:25.554252 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:25.580553 1078428 cri.go:89] found id: ""
	I1210 07:52:25.580584 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.580593 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:25.580599 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:25.580669 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:25.606241 1078428 cri.go:89] found id: ""
	I1210 07:52:25.606309 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.606341 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:25.606369 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:25.606449 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:25.630882 1078428 cri.go:89] found id: ""
	I1210 07:52:25.630912 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.630921 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:25.630928 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:25.631028 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:25.657178 1078428 cri.go:89] found id: ""
	I1210 07:52:25.657207 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.657215 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:25.657221 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:25.657282 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:25.686580 1078428 cri.go:89] found id: ""
	I1210 07:52:25.686604 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.686612 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:25.686622 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:25.686634 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:25.737209 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:52:25.742985 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:25.743060 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:52:25.816909 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:25.817156 1078428 retry.go:31] will retry after 25.212576039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:25.818420 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:25.818454 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:25.889855 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:25.881344    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.882021    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.883760    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.884373    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.885999    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:25.881344    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.882021    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.883760    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.884373    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.885999    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:25.889919 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:25.889939 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:25.915022 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:25.915058 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:27.076870 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:27.134892 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:27.134924 1078428 retry.go:31] will retry after 48.20102621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
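
Every one of these applies is doomed until the apiserver answers, so each attempt only burns its backoff interval. One alternative, sketched below purely for illustration (it is not what minikube does here; minikube simply keeps retrying), is to gate applies on the apiserver's /readyz endpoint, which default RBAC exposes to anonymous clients. InsecureSkipVerify is a sketch-only shortcut:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitReadyz polls the apiserver's /readyz endpoint until it answers 200 OK
// or the timeout elapses.
func waitReadyz(base string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   3 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(base + "/readyz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not ready after %s", base, timeout)
}

func main() {
	if err := waitReadyz("https://localhost:8443", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver ready; safe to kubectl apply")
}
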
	I1210 07:52:28.443268 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:28.454097 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:28.454172 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:28.482759 1078428 cri.go:89] found id: ""
	I1210 07:52:28.482789 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.482798 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:28.482805 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:28.482868 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:28.507737 1078428 cri.go:89] found id: ""
	I1210 07:52:28.507760 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.507769 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:28.507775 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:28.507836 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:28.532881 1078428 cri.go:89] found id: ""
	I1210 07:52:28.532907 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.532916 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:28.532923 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:28.532989 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:28.562425 1078428 cri.go:89] found id: ""
	I1210 07:52:28.562451 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.562460 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:28.562489 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:28.562551 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:28.587926 1078428 cri.go:89] found id: ""
	I1210 07:52:28.587952 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.587961 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:28.587967 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:28.588026 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:28.613523 1078428 cri.go:89] found id: ""
	I1210 07:52:28.613593 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.613617 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:28.613638 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:28.613730 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:28.637796 1078428 cri.go:89] found id: ""
	I1210 07:52:28.637864 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.637888 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:28.637907 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:28.637993 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:28.666907 1078428 cri.go:89] found id: ""
	I1210 07:52:28.666937 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.666946 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:28.666956 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:28.666968 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:28.722569 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:28.722604 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:28.738517 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:28.738592 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:28.814307 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:28.798713    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.799649    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.803563    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.804492    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.807397    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:28.798713    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.799649    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.803563    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.804492    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.807397    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:28.814366 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:28.814395 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:28.842824 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:28.842905 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:26.536333 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:52:26.554155 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:26.621759 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:26.621788 1077343 retry.go:31] will retry after 32.881374862s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:29.054917 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:30.163626 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:30.226039 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:30.226073 1077343 retry.go:31] will retry after 27.175178767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:31.380548 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:31.391083 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:31.391159 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:31.416470 1078428 cri.go:89] found id: ""
	I1210 07:52:31.416496 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.416504 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:31.416510 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:31.416570 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:31.441740 1078428 cri.go:89] found id: ""
	I1210 07:52:31.441767 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.441776 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:31.441782 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:31.441843 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:31.465834 1078428 cri.go:89] found id: ""
	I1210 07:52:31.465860 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.465869 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:31.465875 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:31.465935 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:31.492061 1078428 cri.go:89] found id: ""
	I1210 07:52:31.492085 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.492093 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:31.492099 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:31.492177 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:31.515891 1078428 cri.go:89] found id: ""
	I1210 07:52:31.515971 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.515993 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:31.516010 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:31.516096 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:31.540039 1078428 cri.go:89] found id: ""
	I1210 07:52:31.540061 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.540069 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:31.540076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:31.540169 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:31.565345 1078428 cri.go:89] found id: ""
	I1210 07:52:31.565372 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.565388 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:31.565395 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:31.565513 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:31.590011 1078428 cri.go:89] found id: ""
	I1210 07:52:31.590035 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.590044 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:31.590074 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:31.590089 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:31.656796 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:31.649034    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.649484    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.650940    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.651317    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.652719    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:31.656816 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:31.656828 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:31.681821 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:31.681855 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:31.709786 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:31.709815 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:31.764688 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:31.764728 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:34.283681 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:34.296241 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:34.296314 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:34.337179 1078428 cri.go:89] found id: ""
	I1210 07:52:34.337201 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.337210 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:34.337216 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:34.337274 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:34.369583 1078428 cri.go:89] found id: ""
	I1210 07:52:34.369611 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.369619 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:34.369625 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:34.369683 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:34.395566 1078428 cri.go:89] found id: ""
	I1210 07:52:34.395591 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.395600 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:34.395606 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:34.395688 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:34.419610 1078428 cri.go:89] found id: ""
	I1210 07:52:34.419677 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.419702 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:34.419718 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:34.419797 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:34.444441 1078428 cri.go:89] found id: ""
	I1210 07:52:34.444511 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.444535 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:34.444550 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:34.444627 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:34.469517 1078428 cri.go:89] found id: ""
	I1210 07:52:34.469540 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.469549 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:34.469556 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:34.469618 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:34.494093 1078428 cri.go:89] found id: ""
	I1210 07:52:34.494120 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.494129 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:34.494136 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:34.494196 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	W1210 07:52:31.554771 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:34.054729 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:34.756990 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:52:34.831836 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:34.831956 1077343 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:52:34.518575 1078428 cri.go:89] found id: ""
	I1210 07:52:34.518658 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.518674 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:34.518685 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:34.518698 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:34.534743 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:34.534770 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:34.597542 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:34.589981    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.590406    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.591861    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.592187    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.593602    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:34.597564 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:34.597577 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:34.622841 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:34.622876 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:34.653362 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:34.653395 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:37.036872 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:52:37.117418 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:37.117451 1078428 retry.go:31] will retry after 42.271832156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:37.209642 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:37.220263 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:37.220360 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:37.244517 1078428 cri.go:89] found id: ""
	I1210 07:52:37.244544 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.244552 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:37.244558 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:37.244619 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:37.269073 1078428 cri.go:89] found id: ""
	I1210 07:52:37.269099 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.269108 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:37.269114 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:37.269175 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:37.292561 1078428 cri.go:89] found id: ""
	I1210 07:52:37.292587 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.292596 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:37.292604 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:37.292661 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:37.330286 1078428 cri.go:89] found id: ""
	I1210 07:52:37.330312 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.330321 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:37.330328 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:37.330388 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:37.362527 1078428 cri.go:89] found id: ""
	I1210 07:52:37.362555 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.362564 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:37.362570 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:37.362633 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:37.387887 1078428 cri.go:89] found id: ""
	I1210 07:52:37.387912 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.387921 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:37.387927 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:37.387988 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:37.412303 1078428 cri.go:89] found id: ""
	I1210 07:52:37.412329 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.412337 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:37.412344 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:37.412451 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:37.436571 1078428 cri.go:89] found id: ""
	I1210 07:52:37.436596 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.436605 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:37.436614 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:37.436626 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:37.462030 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:37.462074 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:37.489847 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:37.489875 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:37.545757 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:37.545792 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:37.561730 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:37.561763 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:37.627065 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:37.618607    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.619149    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.620826    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.621602    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.623135    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 07:52:36.554875 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:39.054027 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:40.127737 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:40.139792 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:40.139876 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:40.166917 1078428 cri.go:89] found id: ""
	I1210 07:52:40.166944 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.166952 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:40.166964 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:40.167028 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:40.193972 1078428 cri.go:89] found id: ""
	I1210 07:52:40.194000 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.194009 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:40.194015 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:40.194111 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:40.226660 1078428 cri.go:89] found id: ""
	I1210 07:52:40.226693 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.226702 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:40.226709 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:40.226774 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:40.257013 1078428 cri.go:89] found id: ""
	I1210 07:52:40.257056 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.257067 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:40.257074 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:40.257140 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:40.282449 1078428 cri.go:89] found id: ""
	I1210 07:52:40.282500 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.282509 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:40.282516 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:40.282580 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:40.332986 1078428 cri.go:89] found id: ""
	I1210 07:52:40.333018 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.333027 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:40.333050 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:40.333188 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:40.366223 1078428 cri.go:89] found id: ""
	I1210 07:52:40.366258 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.366268 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:40.366275 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:40.366347 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:40.393136 1078428 cri.go:89] found id: ""
	I1210 07:52:40.393163 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.393171 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:40.393181 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:40.393193 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:40.422285 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:40.422314 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:40.481326 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:40.481365 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:40.497675 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:40.497725 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:40.562074 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:40.554513    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.554932    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.556446    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.556761    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.558191    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:40.562093 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:40.562106 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:43.088690 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:43.099750 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:43.099828 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:43.124516 1078428 cri.go:89] found id: ""
	I1210 07:52:43.124552 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.124561 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:43.124567 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:43.124628 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:43.153325 1078428 cri.go:89] found id: ""
	I1210 07:52:43.153347 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.153356 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:43.153362 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:43.153423 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:43.178405 1078428 cri.go:89] found id: ""
	I1210 07:52:43.178429 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.178437 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:43.178443 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:43.178609 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:43.201768 1078428 cri.go:89] found id: ""
	I1210 07:52:43.201791 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.201800 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:43.201806 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:43.201865 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:43.225907 1078428 cri.go:89] found id: ""
	I1210 07:52:43.225931 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.225940 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:43.225946 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:43.226004 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:43.250803 1078428 cri.go:89] found id: ""
	I1210 07:52:43.250828 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.250837 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:43.250843 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:43.250916 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:43.275081 1078428 cri.go:89] found id: ""
	I1210 07:52:43.275147 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.275161 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:43.275168 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:43.275245 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:43.306794 1078428 cri.go:89] found id: ""
	I1210 07:52:43.306827 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.306836 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:43.306845 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:43.306857 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:43.337826 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:43.337854 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:43.396050 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:43.396089 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:43.413002 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:43.413031 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:43.479541 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:43.471065    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.471844    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.473576    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.474063    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.475610    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:43.479565 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:43.479578 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1210 07:52:41.054361 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:43.054892 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:46.005454 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:46.017579 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:46.017658 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:46.053539 1078428 cri.go:89] found id: ""
	I1210 07:52:46.053570 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.053579 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:46.053585 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:46.053649 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:46.088548 1078428 cri.go:89] found id: ""
	I1210 07:52:46.088572 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.088581 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:46.088596 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:46.088660 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:46.126497 1078428 cri.go:89] found id: ""
	I1210 07:52:46.126571 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.126594 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:46.126613 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:46.126734 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:46.150556 1078428 cri.go:89] found id: ""
	I1210 07:52:46.150626 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.150643 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:46.150651 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:46.150719 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:46.174996 1078428 cri.go:89] found id: ""
	I1210 07:52:46.175019 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.175027 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:46.175033 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:46.175107 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:46.199701 1078428 cri.go:89] found id: ""
	I1210 07:52:46.199726 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.199735 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:46.199742 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:46.199845 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:46.224632 1078428 cri.go:89] found id: ""
	I1210 07:52:46.224657 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.224666 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:46.224672 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:46.224752 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:46.248234 1078428 cri.go:89] found id: ""
	I1210 07:52:46.248259 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.248267 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:46.248277 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:46.248334 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:46.264183 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:46.264221 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:46.342979 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:46.323053    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.323907    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.328271    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.328706    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.338602    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:46.343063 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:46.343092 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:46.369476 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:46.369511 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:46.397302 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:46.397339 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:48.952567 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:48.962857 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:48.962931 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:48.992562 1078428 cri.go:89] found id: ""
	I1210 07:52:48.992589 1078428 logs.go:282] 0 containers: []
	W1210 07:52:48.992599 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:48.992606 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:48.992671 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:49.018277 1078428 cri.go:89] found id: ""
	I1210 07:52:49.018303 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.018312 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:49.018318 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:49.018387 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:49.045715 1078428 cri.go:89] found id: ""
	I1210 07:52:49.045743 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.045752 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:49.045758 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:49.045826 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:49.083318 1078428 cri.go:89] found id: ""
	I1210 07:52:49.083348 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.083358 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:49.083364 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:49.083422 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:49.109936 1078428 cri.go:89] found id: ""
	I1210 07:52:49.109958 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.109966 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:49.109989 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:49.110049 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:49.134580 1078428 cri.go:89] found id: ""
	I1210 07:52:49.134607 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.134617 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:49.134623 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:49.134681 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:49.159828 1078428 cri.go:89] found id: ""
	I1210 07:52:49.159906 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.159924 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:49.159931 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:49.160011 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:49.184837 1078428 cri.go:89] found id: ""
	I1210 07:52:49.184862 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.184872 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:49.184881 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:49.184902 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:49.210656 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:49.210691 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:49.241224 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:49.241256 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:49.303253 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:49.303297 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:49.319808 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:49.319838 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:49.389423 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:49.381044    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.381468    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.383366    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.383682    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.385122    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:49.381044    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.381468    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.383366    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.383682    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.385122    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1210 07:52:45.554347 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:47.554702 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:50.054996 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:51.030067 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:52:51.093289 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:51.093415 1078428 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:52:51.889686 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:51.900249 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:51.900353 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:51.925533 1078428 cri.go:89] found id: ""
	I1210 07:52:51.925559 1078428 logs.go:282] 0 containers: []
	W1210 07:52:51.925567 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:51.925621 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:51.925706 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:51.950161 1078428 cri.go:89] found id: ""
	I1210 07:52:51.950186 1078428 logs.go:282] 0 containers: []
	W1210 07:52:51.950194 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:51.950201 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:51.950280 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:51.976938 1078428 cri.go:89] found id: ""
	I1210 07:52:51.976964 1078428 logs.go:282] 0 containers: []
	W1210 07:52:51.976972 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:51.976979 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:51.977038 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:52.006745 1078428 cri.go:89] found id: ""
	I1210 07:52:52.006841 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.006865 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:52.006887 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:52.007015 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:52.033557 1078428 cri.go:89] found id: ""
	I1210 07:52:52.033585 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.033595 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:52.033601 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:52.033672 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:52.066821 1078428 cri.go:89] found id: ""
	I1210 07:52:52.066850 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.066860 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:52.066867 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:52.066929 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:52.101024 1078428 cri.go:89] found id: ""
	I1210 07:52:52.101051 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.101060 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:52.101067 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:52.101128 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:52.130045 1078428 cri.go:89] found id: ""
	I1210 07:52:52.130070 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.130079 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:52.130088 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:52.130100 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:52.184627 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:52.184662 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:52.200733 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:52.200759 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:52.265577 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:52.257022    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.257577    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.259300    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.259861    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.261490    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:52.257022    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.257577    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.259300    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.259861    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.261490    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:52.265610 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:52.265626 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:52.291354 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:52.291390 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:52:52.555048 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:55.054639 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:54.834203 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:54.845400 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:54.845510 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:54.871357 1078428 cri.go:89] found id: ""
	I1210 07:52:54.871383 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.871392 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:54.871399 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:54.871463 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:54.897322 1078428 cri.go:89] found id: ""
	I1210 07:52:54.897352 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.897360 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:54.897366 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:54.897425 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:54.922291 1078428 cri.go:89] found id: ""
	I1210 07:52:54.922320 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.922329 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:54.922334 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:54.922405 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:54.947056 1078428 cri.go:89] found id: ""
	I1210 07:52:54.947080 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.947089 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:54.947095 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:54.947155 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:54.972572 1078428 cri.go:89] found id: ""
	I1210 07:52:54.972599 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.972608 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:54.972614 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:54.972675 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:54.997657 1078428 cri.go:89] found id: ""
	I1210 07:52:54.997685 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.997694 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:54.997700 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:54.997777 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:55.025796 1078428 cri.go:89] found id: ""
	I1210 07:52:55.025819 1078428 logs.go:282] 0 containers: []
	W1210 07:52:55.025829 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:55.025835 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:55.026185 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:55.069593 1078428 cri.go:89] found id: ""
	I1210 07:52:55.069631 1078428 logs.go:282] 0 containers: []
	W1210 07:52:55.069640 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:55.069649 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:55.069662 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:55.135748 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:55.135788 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:55.151784 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:55.151815 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:55.220457 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:55.212663    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.213305    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.215040    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.215532    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.216531    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:55.212663    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.213305    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.215040    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.215532    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.216531    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:55.220480 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:55.220495 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:55.245834 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:55.245869 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:57.774707 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:57.785110 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:57.785178 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:57.810275 1078428 cri.go:89] found id: ""
	I1210 07:52:57.810302 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.810320 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:57.810328 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:57.810389 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:57.838839 1078428 cri.go:89] found id: ""
	I1210 07:52:57.838862 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.838871 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:57.838877 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:57.838937 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:57.863185 1078428 cri.go:89] found id: ""
	I1210 07:52:57.863212 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.863221 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:57.863227 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:57.863287 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:57.890204 1078428 cri.go:89] found id: ""
	I1210 07:52:57.890234 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.890244 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:57.890250 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:57.890314 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:57.916593 1078428 cri.go:89] found id: ""
	I1210 07:52:57.916616 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.916624 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:57.916630 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:57.916690 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:57.940351 1078428 cri.go:89] found id: ""
	I1210 07:52:57.940373 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.940381 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:57.940387 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:57.940448 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:57.965417 1078428 cri.go:89] found id: ""
	I1210 07:52:57.965453 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.965462 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:57.965469 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:57.965535 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:57.989157 1078428 cri.go:89] found id: ""
	I1210 07:52:57.989183 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.989192 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:57.989202 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:57.989213 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:58.015326 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:58.015366 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:58.055222 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:58.055248 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:58.115866 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:58.115945 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:58.131823 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:58.131852 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:58.196880 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:58.188547    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.189067    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.190600    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.191285    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.192890    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:58.188547    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.189067    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.190600    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.191285    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.192890    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:57.402101 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:57.460754 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:57.460865 1077343 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1210 07:52:57.554262 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:59.503589 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:52:59.554549 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:59.576553 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:59.576655 1077343 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:52:59.579701 1077343 out.go:179] * Enabled addons: 
	I1210 07:52:59.582536 1077343 addons.go:530] duration metric: took 1m41.60352286s for enable addons: enabled=[]
	I1210 07:53:00.697148 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:00.707593 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:00.707661 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:00.735938 1078428 cri.go:89] found id: ""
	I1210 07:53:00.735962 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.735971 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:00.735977 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:00.736039 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:00.759785 1078428 cri.go:89] found id: ""
	I1210 07:53:00.759808 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.759817 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:00.759823 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:00.759887 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:00.784529 1078428 cri.go:89] found id: ""
	I1210 07:53:00.784552 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.784561 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:00.784567 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:00.784641 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:00.813420 1078428 cri.go:89] found id: ""
	I1210 07:53:00.813443 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.813452 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:00.813459 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:00.813518 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:00.838413 1078428 cri.go:89] found id: ""
	I1210 07:53:00.838439 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.838449 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:00.838455 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:00.838559 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:00.862923 1078428 cri.go:89] found id: ""
	I1210 07:53:00.862949 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.862968 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:00.862975 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:00.863034 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:00.890339 1078428 cri.go:89] found id: ""
	I1210 07:53:00.890366 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.890375 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:00.890381 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:00.890440 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:00.916963 1078428 cri.go:89] found id: ""
	I1210 07:53:00.916992 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.917001 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:00.917010 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:00.917022 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:00.972565 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:00.972601 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:00.990064 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:00.990154 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:01.068497 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:01.060518    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.061332    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.062990    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.063328    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.064654    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:01.060518    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.061332    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.062990    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.063328    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.064654    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:01.068521 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:01.068534 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:01.097602 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:01.097641 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:03.628666 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:03.639440 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:03.639518 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:03.664498 1078428 cri.go:89] found id: ""
	I1210 07:53:03.664523 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.664531 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:03.664538 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:03.664601 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:03.688357 1078428 cri.go:89] found id: ""
	I1210 07:53:03.688382 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.688391 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:03.688397 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:03.688460 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:03.712874 1078428 cri.go:89] found id: ""
	I1210 07:53:03.712898 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.712906 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:03.712913 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:03.712990 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:03.737610 1078428 cri.go:89] found id: ""
	I1210 07:53:03.737635 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.737643 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:03.737650 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:03.737712 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:03.762668 1078428 cri.go:89] found id: ""
	I1210 07:53:03.762695 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.762703 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:03.762710 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:03.762769 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:03.795710 1078428 cri.go:89] found id: ""
	I1210 07:53:03.795732 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.795741 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:03.795747 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:03.795809 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:03.819247 1078428 cri.go:89] found id: ""
	I1210 07:53:03.819275 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.819285 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:03.819291 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:03.819355 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:03.842854 1078428 cri.go:89] found id: ""
	I1210 07:53:03.842881 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.842891 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:03.842900 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:03.842911 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:03.858681 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:03.858748 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:03.922352 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:03.913909    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.914521    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.916231    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.916791    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.918447    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
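Every describe-nodes attempt in this section fails the same way: the in-node kubectl reads /var/lib/minikube/kubeconfig, whose server is https://localhost:8443, and the dial to [::1]:8443 is refused because nothing is listening there, which matches crictl finding no kube-apiserver container at all. A quick manual confirmation from inside the node, assuming curl is available in the node image (an assumption; it does not appear in this log):

    # A refused connection here means no process is bound on 8443, matching
    # the empty crictl probes, rather than a TLS or RBAC problem.
    sudo curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"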
	I1210 07:53:03.922383 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:03.922401 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:03.948481 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:03.948520 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:03.977218 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:03.977247 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
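The block above is one pass of minikube's apiserver wait loop: it checks for a running apiserver process (sudo pgrep -xnf kube-apiserver.*minikube.*), probes the CRI for each expected control-plane container by name, and, finding none, falls back to collecting kubelet, dmesg, describe-nodes, containerd, and container-status output. A minimal shell sketch of the probe, run inside the node; the crictl command is the one shown in the log, and only the loop wrapper is added here:

    # Probe each expected component the way logs.go does; empty output from
    # crictl reproduces the `found id: ""` lines above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching \"$name\""
    done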
	W1210 07:53:02.054010 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:04.555038 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
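The W-level node_ready lines carry a different pid (1077343): they come from the concurrent TestStartStop/group/no-preload run polling the Ready condition of node no-preload-587009 at 192.168.85.2:8443, and both test processes write to the same stream, so their output interleaves throughout this section. The poll it keeps retrying is equivalent to this hypothetical manual check, assuming kubectl is pointed at that profile's kubeconfig:

    # Prints "True" once the node reports Ready; connection refused while
    # the apiserver on 192.168.85.2:8443 is still down.
    kubectl get node no-preload-587009 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'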
	I1210 07:53:06.532410 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:06.544357 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:06.544451 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:06.576472 1078428 cri.go:89] found id: ""
	I1210 07:53:06.576500 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.576511 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:06.576517 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:06.576581 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:06.609024 1078428 cri.go:89] found id: ""
	I1210 07:53:06.609051 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.609061 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:06.609067 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:06.609134 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:06.636182 1078428 cri.go:89] found id: ""
	I1210 07:53:06.636209 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.636218 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:06.636224 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:06.636286 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:06.664610 1078428 cri.go:89] found id: ""
	I1210 07:53:06.664677 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.664699 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:06.664720 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:06.664812 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:06.690522 1078428 cri.go:89] found id: ""
	I1210 07:53:06.690548 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.690557 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:06.690564 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:06.690626 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:06.716006 1078428 cri.go:89] found id: ""
	I1210 07:53:06.716035 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.716044 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:06.716050 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:06.716115 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:06.740705 1078428 cri.go:89] found id: ""
	I1210 07:53:06.740726 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.740734 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:06.740741 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:06.740803 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:06.764831 1078428 cri.go:89] found id: ""
	I1210 07:53:06.764852 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.764860 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:06.764869 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:06.764881 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:06.820337 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:06.820372 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:06.836899 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:06.836931 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:06.902143 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:06.893655    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.894319    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.895838    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.896417    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.898017    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:06.902164 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:06.902178 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:06.927253 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:06.927289 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
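The container-status step is a shell fallback chain: it runs whichever crictl `which` resolves (or the bare name if none is found) and drops to docker ps -a when that command is missing or exits nonzero. Expanded for readability, under the same assumptions:

    # Same intent as: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a          # preferred: query the CRI runtime directly
    else
      sudo docker ps -a          # fallback when crictl is absent
    fi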
	I1210 07:53:09.458854 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:09.469382 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:09.469466 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:09.494769 1078428 cri.go:89] found id: ""
	I1210 07:53:09.494791 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.494799 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:09.494805 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:09.494866 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1210 07:53:07.053986 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:09.554520 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:09.520347 1078428 cri.go:89] found id: ""
	I1210 07:53:09.520374 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.520383 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:09.520390 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:09.520454 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:09.549983 1078428 cri.go:89] found id: ""
	I1210 07:53:09.550010 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.550019 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:09.550025 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:09.550085 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:09.588794 1078428 cri.go:89] found id: ""
	I1210 07:53:09.588821 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.588830 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:09.588836 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:09.588895 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:09.617370 1078428 cri.go:89] found id: ""
	I1210 07:53:09.617393 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.617401 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:09.617407 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:09.617465 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:09.645730 1078428 cri.go:89] found id: ""
	I1210 07:53:09.645755 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.645779 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:09.645786 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:09.645850 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:09.672062 1078428 cri.go:89] found id: ""
	I1210 07:53:09.672088 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.672097 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:09.672103 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:09.672174 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:09.695770 1078428 cri.go:89] found id: ""
	I1210 07:53:09.695793 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.695802 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:09.695811 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:09.695822 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:09.721144 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:09.721180 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:09.748337 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:09.748367 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:09.802348 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:09.802384 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:09.818196 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:09.818226 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:09.884770 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:09.876535    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.877364    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.879072    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.879398    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.880893    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:12.385627 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:12.396288 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:12.396367 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:12.421158 1078428 cri.go:89] found id: ""
	I1210 07:53:12.421194 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.421204 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:12.421210 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:12.421281 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:12.446171 1078428 cri.go:89] found id: ""
	I1210 07:53:12.446206 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.446216 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:12.446222 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:12.446294 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:12.470791 1078428 cri.go:89] found id: ""
	I1210 07:53:12.470818 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.470828 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:12.470836 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:12.470895 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:12.499441 1078428 cri.go:89] found id: ""
	I1210 07:53:12.499467 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.499476 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:12.499483 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:12.499561 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:12.524188 1078428 cri.go:89] found id: ""
	I1210 07:53:12.524211 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.524219 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:12.524225 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:12.524285 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:12.550501 1078428 cri.go:89] found id: ""
	I1210 07:53:12.550528 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.550537 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:12.550543 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:12.550617 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:12.578576 1078428 cri.go:89] found id: ""
	I1210 07:53:12.578602 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.578611 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:12.578616 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:12.578687 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:12.612078 1078428 cri.go:89] found id: ""
	I1210 07:53:12.612113 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.612122 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:12.612132 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:12.612144 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:12.645096 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:12.645125 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:12.700179 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:12.700217 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:12.715578 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:12.715606 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:12.781369 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:12.772466    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.773589    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.774343    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.775989    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.776533    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:12.781391 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:12.781403 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1210 07:53:11.554633 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:14.054508 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:15.306176 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:15.317232 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:15.317315 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:15.336640 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:53:15.353595 1078428 cri.go:89] found id: ""
	I1210 07:53:15.353626 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.353635 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:15.353642 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:15.353703 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1210 07:53:15.421893 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:53:15.421994 1078428 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
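The addon apply fails before anything reaches the cluster: kubectl's client-side validation first downloads the server's OpenAPI document, and that GET on localhost:8443 is refused. The error text suggests --validate=false, but as a sketch of why that would not rescue this run, the apply itself still needs a live apiserver to create the StorageClass, so minikube logs "will retry" (addons.go:477) and re-attempts later:

    # Same command as in the log, with the suggested flag added; with the
    # apiserver down this still fails, now at the create call instead of
    # at validation.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      --validate=false -f /etc/kubernetes/addons/storageclass.yaml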
	I1210 07:53:15.422157 1078428 cri.go:89] found id: ""
	I1210 07:53:15.422177 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.422185 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:15.422192 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:15.422270 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:15.447660 1078428 cri.go:89] found id: ""
	I1210 07:53:15.447684 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.447693 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:15.447699 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:15.447763 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:15.471893 1078428 cri.go:89] found id: ""
	I1210 07:53:15.471918 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.471927 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:15.471934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:15.472003 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:15.496880 1078428 cri.go:89] found id: ""
	I1210 07:53:15.496915 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.496924 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:15.496930 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:15.496999 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:15.525007 1078428 cri.go:89] found id: ""
	I1210 07:53:15.525043 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.525055 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:15.525061 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:15.525138 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:15.556732 1078428 cri.go:89] found id: ""
	I1210 07:53:15.556776 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.556785 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:15.556792 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:15.556864 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:15.592802 1078428 cri.go:89] found id: ""
	I1210 07:53:15.592835 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.592844 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:15.592854 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:15.592866 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:15.660809 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:15.660846 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:15.677009 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:15.677040 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:15.743204 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:15.734495    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.735207    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.736858    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.737446    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.739134    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:15.743227 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:15.743239 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:15.768020 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:15.768053 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:18.297028 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:18.310128 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:18.310198 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:18.340476 1078428 cri.go:89] found id: ""
	I1210 07:53:18.340572 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.340599 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:18.340642 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:18.340769 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:18.369516 1078428 cri.go:89] found id: ""
	I1210 07:53:18.369582 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.369614 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:18.369633 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:18.369753 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:18.396295 1078428 cri.go:89] found id: ""
	I1210 07:53:18.396321 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.396330 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:18.396336 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:18.396428 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:18.422012 1078428 cri.go:89] found id: ""
	I1210 07:53:18.422037 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.422046 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:18.422052 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:18.422164 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:18.446495 1078428 cri.go:89] found id: ""
	I1210 07:53:18.446518 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.446526 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:18.446532 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:18.446600 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:18.471650 1078428 cri.go:89] found id: ""
	I1210 07:53:18.471674 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.471682 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:18.471688 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:18.471779 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:18.495591 1078428 cri.go:89] found id: ""
	I1210 07:53:18.495616 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.495624 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:18.495631 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:18.495694 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:18.523464 1078428 cri.go:89] found id: ""
	I1210 07:53:18.523489 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.523497 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:18.523506 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:18.523518 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:18.585434 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:18.585481 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:18.610315 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:18.610344 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:18.674572 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:18.666289    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.667061    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.668721    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.669016    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.670764    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:18.674593 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:18.674607 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:18.699401 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:18.699435 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:19.389521 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:53:19.452005 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:53:19.452105 1078428 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:53:19.455408 1078428 out.go:179] * Enabled addons: 
	I1210 07:53:19.458237 1078428 addons.go:530] duration metric: took 1m57.316864384s for enable addons: enabled=[]
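"enabled=[]" confirms the addon phase ended with nothing applied: both default-storageclass and storage-provisioner failed against the unreachable apiserver, and the 1m57s duration metric is the whole retry window. A hypothetical recovery once the apiserver is healthy, with <profile> standing in for the profile name (it is not shown in this excerpt):

    # Inspect and re-enable the two addons that failed above.
    minikube -p <profile> addons list
    minikube -p <profile> addons enable default-storageclass
    minikube -p <profile> addons enable storage-provisioner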
	W1210 07:53:16.054718 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:18.554815 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:21.227168 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:21.237506 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:21.237577 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:21.261812 1078428 cri.go:89] found id: ""
	I1210 07:53:21.261842 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.261852 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:21.261858 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:21.261921 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:21.289741 1078428 cri.go:89] found id: ""
	I1210 07:53:21.289767 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.289787 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:21.289794 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:21.289855 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:21.331373 1078428 cri.go:89] found id: ""
	I1210 07:53:21.331400 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.331410 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:21.331415 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:21.331534 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:21.364401 1078428 cri.go:89] found id: ""
	I1210 07:53:21.364427 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.364436 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:21.364443 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:21.364504 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:21.395936 1078428 cri.go:89] found id: ""
	I1210 07:53:21.395965 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.395975 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:21.395981 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:21.396044 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:21.420965 1078428 cri.go:89] found id: ""
	I1210 07:53:21.420996 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.421005 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:21.421012 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:21.421073 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:21.446318 1078428 cri.go:89] found id: ""
	I1210 07:53:21.446345 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.446354 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:21.446360 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:21.446422 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:21.475470 1078428 cri.go:89] found id: ""
	I1210 07:53:21.475499 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.475509 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:21.475521 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:21.475537 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:21.530313 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:21.530354 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:21.548651 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:21.548737 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:21.632055 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:21.623055    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.623614    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.625291    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.625976    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.627769    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:21.632137 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:21.632157 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:21.659428 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:21.659466 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:24.192421 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:24.203056 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:24.203137 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:24.232457 1078428 cri.go:89] found id: ""
	I1210 07:53:24.232493 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.232502 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:24.232509 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:24.232576 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:24.260730 1078428 cri.go:89] found id: ""
	I1210 07:53:24.260758 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.260768 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:24.260774 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:24.260837 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:24.284981 1078428 cri.go:89] found id: ""
	I1210 07:53:24.285009 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.285018 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:24.285024 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:24.285086 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:24.316578 1078428 cri.go:89] found id: ""
	I1210 07:53:24.316604 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.316613 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:24.316619 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:24.316678 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:24.353587 1078428 cri.go:89] found id: ""
	I1210 07:53:24.353622 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.353638 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:24.353645 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:24.353740 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:24.384460 1078428 cri.go:89] found id: ""
	I1210 07:53:24.384483 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.384492 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:24.384498 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:24.384562 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:24.414252 1078428 cri.go:89] found id: ""
	I1210 07:53:24.414280 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.414290 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:24.414296 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:24.414361 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:24.442225 1078428 cri.go:89] found id: ""
	I1210 07:53:24.442247 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.442256 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:24.442265 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:24.442276 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:24.467596 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:24.467629 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:53:21.054852 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:23.554809 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:24.499949 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:24.499977 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:24.558185 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:24.558223 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:24.576232 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:24.576264 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:24.646699 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:24.638205    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.639089    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.639811    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.641363    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.641799    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
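The sweep above (pgrep for the apiserver process, then one crictl query per control-plane component) is the check minikube repeats while a start is stalled. Below is a minimal local Go sketch of the crictl half; listContainerIDs is a hypothetical helper, not minikube's cri.go, and it assumes crictl and sudo are available on the host, whereas minikube runs the command over SSH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the logged command
// `sudo crictl ps -a --quiet --name=<name>`: with --quiet, crictl prints
// one container ID per line, so empty output means no match.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, name := range components {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			// Corresponds to the `found id: ""` / `0 containers` pairs above.
			fmt.Printf("no container found matching %q\n", name)
		} else {
			fmt.Printf("%q: %v\n", name, ids)
		}
	}
}

Every pass in this log hits the empty case for all eight names, which is why the gatherer keeps falling back to journald, dmesg, and the container-status listing.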
	I1210 07:53:27.148382 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:27.158984 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:27.159102 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:27.183857 1078428 cri.go:89] found id: ""
	I1210 07:53:27.183927 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.183943 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:27.183951 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:27.184028 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:27.207461 1078428 cri.go:89] found id: ""
	I1210 07:53:27.207529 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.207554 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:27.207568 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:27.207645 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:27.234849 1078428 cri.go:89] found id: ""
	I1210 07:53:27.234876 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.234884 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:27.234890 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:27.234948 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:27.258887 1078428 cri.go:89] found id: ""
	I1210 07:53:27.258910 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.258919 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:27.258926 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:27.258983 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:27.283113 1078428 cri.go:89] found id: ""
	I1210 07:53:27.283189 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.283206 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:27.283214 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:27.283283 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:27.324968 1078428 cri.go:89] found id: ""
	I1210 07:53:27.324994 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.325004 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:27.325010 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:27.325070 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:27.355711 1078428 cri.go:89] found id: ""
	I1210 07:53:27.355739 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.355749 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:27.355755 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:27.355817 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:27.383387 1078428 cri.go:89] found id: ""
	I1210 07:53:27.383424 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.383435 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:27.383445 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:27.383456 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:27.408324 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:27.408363 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:27.438348 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:27.438424 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:27.496282 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:27.496317 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:27.512354 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:27.512385 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:27.586988 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:27.577963    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.578714    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.580435    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.580907    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.582816    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
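Each "failed describe nodes" entry has the same root cause: the bundled kubectl dials the apiserver at localhost:8443 and nothing is listening, because no kube-apiserver container ever started. A hypothetical stand-alone Go probe for exactly that condition (the address and timeout here are illustrative assumptions, not minikube code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// kubectl's "connect: connection refused" above is a plain TCP-level
	// failure; dialing the same endpoint reproduces it.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}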
	W1210 07:53:26.054246 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:28.554092 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:30.088030 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:30.100373 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:30.100449 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:30.127922 1078428 cri.go:89] found id: ""
	I1210 07:53:30.127998 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.128023 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:30.128041 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:30.128120 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:30.160672 1078428 cri.go:89] found id: ""
	I1210 07:53:30.160699 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.160709 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:30.160722 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:30.160784 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:30.186050 1078428 cri.go:89] found id: ""
	I1210 07:53:30.186077 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.186086 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:30.186093 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:30.186157 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:30.211107 1078428 cri.go:89] found id: ""
	I1210 07:53:30.211132 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.211141 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:30.211147 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:30.211213 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:30.235571 1078428 cri.go:89] found id: ""
	I1210 07:53:30.235598 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.235608 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:30.235615 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:30.235678 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:30.264308 1078428 cri.go:89] found id: ""
	I1210 07:53:30.264331 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.264339 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:30.264346 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:30.264413 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:30.288489 1078428 cri.go:89] found id: ""
	I1210 07:53:30.288557 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.288581 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:30.288594 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:30.288673 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:30.318600 1078428 cri.go:89] found id: ""
	I1210 07:53:30.318628 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.318638 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:30.318648 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:30.318679 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:30.359074 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:30.359103 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:30.417146 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:30.417182 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:30.432931 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:30.432960 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:30.497452 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:30.488702    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.489502    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.491238    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.491784    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.493510    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:30.497474 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:30.497487 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
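Each cycle, including the one that begins on the next line, opens with a pgrep probe before the crictl sweep. A hedged Go sketch of that probe; apiserverRunning is a hypothetical helper that relies only on pgrep's documented behavior of exiting nonzero when no process matches:

package main

import (
	"fmt"
	"os/exec"
)

// apiserverRunning mirrors the logged command
// `sudo pgrep -xnf kube-apiserver.*minikube.*`.
func apiserverRunning() bool {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil // pgrep exits 1 when nothing matches
}

func main() {
	fmt.Println("kube-apiserver process present:", apiserverRunning())
}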
	I1210 07:53:33.027579 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:33.038128 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:33.038197 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:33.063535 1078428 cri.go:89] found id: ""
	I1210 07:53:33.063560 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.063572 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:33.063578 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:33.063642 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:33.087384 1078428 cri.go:89] found id: ""
	I1210 07:53:33.087406 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.087414 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:33.087420 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:33.087478 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:33.112186 1078428 cri.go:89] found id: ""
	I1210 07:53:33.112247 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.112258 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:33.112265 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:33.112326 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:33.136102 1078428 cri.go:89] found id: ""
	I1210 07:53:33.136125 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.136133 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:33.136139 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:33.136202 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:33.160865 1078428 cri.go:89] found id: ""
	I1210 07:53:33.160931 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.160957 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:33.160986 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:33.161071 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:33.185964 1078428 cri.go:89] found id: ""
	I1210 07:53:33.186031 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.186054 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:33.186075 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:33.186150 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:33.211060 1078428 cri.go:89] found id: ""
	I1210 07:53:33.211086 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.211095 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:33.211100 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:33.211180 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:33.236111 1078428 cri.go:89] found id: ""
	I1210 07:53:33.236180 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.236213 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:33.236227 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:33.236251 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:33.252003 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:33.252029 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:33.315902 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:33.308251    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.308659    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.310144    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.310442    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.311844    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:33.315967 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:33.316003 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:33.342524 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:33.342604 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:33.377391 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:33.377419 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:53:30.554186 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:33.054061 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:35.054801 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
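The interleaved 1077343 lines belong to the concurrent no-preload-587009 test, which is polling that node's Ready condition and retrying on every connection-refused error. A sketch of such a poll loop using client-go, which minikube vendors; the kubeconfig path is a placeholder and this is not the actual node_ready implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "no-preload-587009", metav1.GetOptions{})
			if err != nil {
				fmt.Println("will retry:", err) // e.g. connect: connection refused
				return false, nil               // swallow the error so polling continues
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	fmt.Println("poll finished:", err)
}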
	I1210 07:53:35.933860 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:35.945070 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:35.945142 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:35.971394 1078428 cri.go:89] found id: ""
	I1210 07:53:35.971423 1078428 logs.go:282] 0 containers: []
	W1210 07:53:35.971432 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:35.971438 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:35.971501 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:36.005170 1078428 cri.go:89] found id: ""
	I1210 07:53:36.005227 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.005240 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:36.005248 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:36.005329 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:36.035275 1078428 cri.go:89] found id: ""
	I1210 07:53:36.035299 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.035307 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:36.035313 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:36.035380 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:36.060232 1078428 cri.go:89] found id: ""
	I1210 07:53:36.060255 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.060266 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:36.060272 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:36.060336 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:36.084825 1078428 cri.go:89] found id: ""
	I1210 07:53:36.084850 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.084859 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:36.084866 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:36.084955 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:36.110606 1078428 cri.go:89] found id: ""
	I1210 07:53:36.110630 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.110639 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:36.110664 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:36.110728 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:36.139205 1078428 cri.go:89] found id: ""
	I1210 07:53:36.139232 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.139241 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:36.139248 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:36.139358 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:36.165255 1078428 cri.go:89] found id: ""
	I1210 07:53:36.165279 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.165287 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:36.165296 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:36.165308 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:36.190967 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:36.191003 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:36.228036 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:36.228070 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:36.283588 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:36.283626 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:36.308631 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:36.308660 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:36.382721 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:36.374555    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.375219    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.376727    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.377183    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.378650    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
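When every component lookup comes back empty, the gatherer falls back to host-level logs: the journald units for containerd and kubelet, a severity-filtered dmesg, and a crictl listing with a docker fallback. A minimal Go sketch that runs the same commands locally; the command strings are copied from the log, gather is a hypothetical helper, and local root access is assumed in place of minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command through bash, exactly as the
// ssh_runner lines above do, and reports how much output came back.
func gather(label, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("== %s (err=%v, %d bytes) ==\n", label, err, len(out))
}

func main() {
	gather("containerd", "sudo journalctl -u containerd -n 400")
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	// The backticks fall back to a bare `crictl` (and then docker) when
	// `which crictl` finds nothing, matching the logged command.
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}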
	I1210 07:53:38.882925 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:38.893611 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:38.893738 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:38.919385 1078428 cri.go:89] found id: ""
	I1210 07:53:38.919418 1078428 logs.go:282] 0 containers: []
	W1210 07:53:38.919427 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:38.919433 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:38.919504 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:38.943787 1078428 cri.go:89] found id: ""
	I1210 07:53:38.943814 1078428 logs.go:282] 0 containers: []
	W1210 07:53:38.943824 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:38.943832 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:38.943896 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:38.968361 1078428 cri.go:89] found id: ""
	I1210 07:53:38.968433 1078428 logs.go:282] 0 containers: []
	W1210 07:53:38.968451 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:38.968458 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:38.968520 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:38.995636 1078428 cri.go:89] found id: ""
	I1210 07:53:38.995661 1078428 logs.go:282] 0 containers: []
	W1210 07:53:38.995670 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:38.995677 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:38.995754 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:39.021416 1078428 cri.go:89] found id: ""
	I1210 07:53:39.021452 1078428 logs.go:282] 0 containers: []
	W1210 07:53:39.021462 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:39.021470 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:39.021552 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:39.048415 1078428 cri.go:89] found id: ""
	I1210 07:53:39.048441 1078428 logs.go:282] 0 containers: []
	W1210 07:53:39.048450 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:39.048456 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:39.048545 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:39.074528 1078428 cri.go:89] found id: ""
	I1210 07:53:39.074554 1078428 logs.go:282] 0 containers: []
	W1210 07:53:39.074563 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:39.074569 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:39.074633 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:39.099525 1078428 cri.go:89] found id: ""
	I1210 07:53:39.099551 1078428 logs.go:282] 0 containers: []
	W1210 07:53:39.099571 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:39.099581 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:39.099594 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:39.166056 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:39.157362    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.157880    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.159752    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.160310    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.161990    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:39.166080 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:39.166094 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:39.191445 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:39.191482 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:39.221901 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:39.221931 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:39.276698 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:39.276735 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:53:37.554212 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:40.054014 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:41.793231 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:41.806351 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:41.806419 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:41.833486 1078428 cri.go:89] found id: ""
	I1210 07:53:41.833508 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.833517 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:41.833523 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:41.833587 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:41.863627 1078428 cri.go:89] found id: ""
	I1210 07:53:41.863650 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.863659 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:41.863665 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:41.863723 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:41.891468 1078428 cri.go:89] found id: ""
	I1210 07:53:41.891492 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.891502 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:41.891509 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:41.891575 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:41.916517 1078428 cri.go:89] found id: ""
	I1210 07:53:41.916542 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.916550 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:41.916557 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:41.916616 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:41.942528 1078428 cri.go:89] found id: ""
	I1210 07:53:41.942555 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.942577 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:41.942584 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:41.942646 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:41.966600 1078428 cri.go:89] found id: ""
	I1210 07:53:41.966624 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.966633 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:41.966639 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:41.966707 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:41.990797 1078428 cri.go:89] found id: ""
	I1210 07:53:41.990831 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.990840 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:41.990846 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:41.990914 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:42.024121 1078428 cri.go:89] found id: ""
	I1210 07:53:42.024148 1078428 logs.go:282] 0 containers: []
	W1210 07:53:42.024158 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:42.024169 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:42.024181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:42.080753 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:42.080799 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:42.098930 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:42.098965 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:42.176005 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:42.165806    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.166811    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.168570    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.169232    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.170912    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:42.176075 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:42.176108 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:42.205998 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:42.206045 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:53:42.054513 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:44.553993 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:44.740690 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:44.751788 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:44.751908 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:44.777536 1078428 cri.go:89] found id: ""
	I1210 07:53:44.777563 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.777571 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:44.777578 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:44.777640 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:44.805133 1078428 cri.go:89] found id: ""
	I1210 07:53:44.805161 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.805170 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:44.805176 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:44.805237 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:44.842340 1078428 cri.go:89] found id: ""
	I1210 07:53:44.842368 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.842383 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:44.842390 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:44.842451 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:44.875009 1078428 cri.go:89] found id: ""
	I1210 07:53:44.875035 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.875044 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:44.875050 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:44.875144 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:44.900854 1078428 cri.go:89] found id: ""
	I1210 07:53:44.900880 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.900889 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:44.900895 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:44.900993 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:44.926168 1078428 cri.go:89] found id: ""
	I1210 07:53:44.926194 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.926203 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:44.926210 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:44.926302 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:44.951565 1078428 cri.go:89] found id: ""
	I1210 07:53:44.951590 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.951599 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:44.951605 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:44.951700 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:44.981123 1078428 cri.go:89] found id: ""
	I1210 07:53:44.981151 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.981160 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:44.981170 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:44.981181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:45.061176 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:45.048172    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.049203    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.052057    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.052627    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.055814    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:45.061213 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:45.061227 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:45.119245 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:45.119283 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:45.172398 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:45.172430 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:45.255583 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:45.255726 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
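The cycle above is the harness's crash-diagnosis loop: for each control-plane component it shells out to crictl, an empty ID list produces the "No container was found matching" warnings, and the pass ends by gathering kubelet, dmesg, describe-nodes, containerd, and container-status logs. A minimal stand-alone sketch of the container probe, assuming crictl is on PATH and sudo is passwordless (function and variable names here are illustrative, not minikube's):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the `sudo crictl ps -a --quiet --name=<name>`
// probe seen in the log: it returns the IDs of all containers (running
// or exited) whose name matches, one per line on stdout.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			// This is the state the log shows on every pass.
			fmt.Printf("no container found matching %q\n", c)
		} else {
			fmt.Printf("%q: %d container(s)\n", c, len(ids))
		}
	}
}

Because --quiet prints only IDs, an empty stdout is the cheapest possible "this component never started" signal, which is why the loop can afford to re-run it every few seconds.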
	I1210 07:53:47.779428 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:47.790537 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:47.790611 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:47.831579 1078428 cri.go:89] found id: ""
	I1210 07:53:47.831602 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.831610 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:47.831617 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:47.831677 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:47.859808 1078428 cri.go:89] found id: ""
	I1210 07:53:47.859835 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.859844 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:47.859850 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:47.859916 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:47.885720 1078428 cri.go:89] found id: ""
	I1210 07:53:47.885745 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.885754 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:47.885761 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:47.885829 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:47.910568 1078428 cri.go:89] found id: ""
	I1210 07:53:47.910594 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.910604 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:47.910610 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:47.910668 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:47.934447 1078428 cri.go:89] found id: ""
	I1210 07:53:47.934495 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.934505 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:47.934511 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:47.934571 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:47.959745 1078428 cri.go:89] found id: ""
	I1210 07:53:47.959772 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.959782 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:47.959788 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:47.959871 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:47.984059 1078428 cri.go:89] found id: ""
	I1210 07:53:47.984085 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.984095 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:47.984102 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:47.984163 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:48.011978 1078428 cri.go:89] found id: ""
	I1210 07:53:48.012007 1078428 logs.go:282] 0 containers: []
	W1210 07:53:48.012018 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:48.012030 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:48.012043 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:48.069700 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:48.069738 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:48.086303 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:48.086345 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:48.160973 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:48.152656    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.153231    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.154815    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.155339    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.156984    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:48.152656    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.153231    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.154815    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.155339    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.156984    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:48.160994 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:48.161008 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:48.185832 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:48.185868 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
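Every describe-nodes attempt in this run dies the same way: dial tcp [::1]:8443: connect: connection refused, i.e. nothing is listening on the apiserver port at all. A quick stdlib check that distinguishes "apiserver not listening" from "listening but unhealthy" could look like the sketch below (address and port copied from the log; the helper name is hypothetical):

package main

import (
	"fmt"
	"net"
	"time"
)

// apiserverListening reports whether anything accepts TCP connections
// on the apiserver address. A refused dial, as in the log above, means
// the kube-apiserver container never came up at all.
func apiserverListening(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	if !apiserverListening("localhost:8443") {
		fmt.Println("kube-apiserver is not listening; kubectl will report 'connection refused'")
	}
}

A refused TCP dial points at the apiserver process or container never starting, rather than at a TLS, kubeconfig, or authorization problem, which is consistent with crictl finding zero containers.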
	W1210 07:53:46.554777 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:49.054179 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:50.713469 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:50.724372 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:50.724452 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:50.750268 1078428 cri.go:89] found id: ""
	I1210 07:53:50.750292 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.750300 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:50.750306 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:50.750368 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:50.776624 1078428 cri.go:89] found id: ""
	I1210 07:53:50.776689 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.776704 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:50.776711 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:50.776769 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:50.807024 1078428 cri.go:89] found id: ""
	I1210 07:53:50.807051 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.807060 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:50.807070 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:50.807127 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:50.851753 1078428 cri.go:89] found id: ""
	I1210 07:53:50.851831 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.851855 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:50.851879 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:50.852000 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:50.878419 1078428 cri.go:89] found id: ""
	I1210 07:53:50.878571 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.878589 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:50.878597 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:50.878667 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:50.904710 1078428 cri.go:89] found id: ""
	I1210 07:53:50.904741 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.904750 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:50.904756 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:50.904819 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:50.929368 1078428 cri.go:89] found id: ""
	I1210 07:53:50.929398 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.929421 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:50.929428 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:50.929495 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:50.956973 1078428 cri.go:89] found id: ""
	I1210 07:53:50.956998 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.957006 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:50.957016 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:50.957028 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:50.982743 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:50.982778 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:51.015675 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:51.015706 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:51.072656 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:51.072697 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:51.089028 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:51.089115 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:51.156089 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:51.147578    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.148310    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.150005    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.150357    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.151933    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:51.147578    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.148310    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.150005    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.150357    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.151933    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:53.657305 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:53.668282 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:53.668364 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:53.693314 1078428 cri.go:89] found id: ""
	I1210 07:53:53.693340 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.693349 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:53.693356 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:53.693417 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:53.718128 1078428 cri.go:89] found id: ""
	I1210 07:53:53.718154 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.718169 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:53.718176 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:53.718234 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:53.744359 1078428 cri.go:89] found id: ""
	I1210 07:53:53.744397 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.744406 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:53.744412 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:53.744485 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:53.773658 1078428 cri.go:89] found id: ""
	I1210 07:53:53.773737 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.773760 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:53.773782 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:53.773879 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:53.804702 1078428 cri.go:89] found id: ""
	I1210 07:53:53.804772 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.804796 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:53.804815 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:53.804905 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:53.840639 1078428 cri.go:89] found id: ""
	I1210 07:53:53.840706 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.840730 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:53.840753 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:53.840846 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:53.869303 1078428 cri.go:89] found id: ""
	I1210 07:53:53.869373 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.869397 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:53.869419 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:53.869508 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:53.898651 1078428 cri.go:89] found id: ""
	I1210 07:53:53.898742 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.898764 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:53.898787 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:53.898821 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:53.924144 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:53.924181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:53.953086 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:53.953118 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:54.008451 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:54.008555 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:54.027281 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:54.027312 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:54.091065 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:54.082296    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.083017    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.084636    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.085187    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.086808    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:54.082296    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.083017    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.084636    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.085187    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.086808    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1210 07:53:51.054819 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:53.554121 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
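The interleaved 1077343 lines come from the parallel no-preload test, which polls its node's Ready condition against 192.168.85.2:8443 on a roughly 2.5-second cadence and hits the same refused connection. The real check goes through the Kubernetes API; as a stand-in, a fixed-interval retry loop with a deadline might be sketched like this (TCP probe only, names illustrative):

package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

// waitNodeReachable retries a cheap TCP probe of the node's apiserver
// endpoint at a fixed interval until it succeeds or the deadline
// passes, matching the retry cadence of the node_ready warnings above.
func waitNodeReachable(addr string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, interval)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Printf("error reaching %s (will retry): %v\n", addr, err)
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for " + addr)
}

func main() {
	if err := waitNodeReachable("192.168.85.2:8443", 2500*time.Millisecond, 30*time.Second); err != nil {
		fmt.Println(err)
	}
}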
	I1210 07:53:56.591259 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:56.602391 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:56.602493 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:56.627566 1078428 cri.go:89] found id: ""
	I1210 07:53:56.627597 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.627607 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:56.627614 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:56.627677 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:56.654900 1078428 cri.go:89] found id: ""
	I1210 07:53:56.654928 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.654937 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:56.654944 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:56.655007 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:56.679562 1078428 cri.go:89] found id: ""
	I1210 07:53:56.679592 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.679606 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:56.679612 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:56.679737 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:56.703320 1078428 cri.go:89] found id: ""
	I1210 07:53:56.703345 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.703355 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:56.703361 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:56.703420 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:56.731538 1078428 cri.go:89] found id: ""
	I1210 07:53:56.731564 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.731573 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:56.731579 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:56.731664 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:56.756416 1078428 cri.go:89] found id: ""
	I1210 07:53:56.756442 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.756451 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:56.756457 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:56.756523 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:56.785074 1078428 cri.go:89] found id: ""
	I1210 07:53:56.785097 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.785106 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:56.785111 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:56.785171 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:56.815793 1078428 cri.go:89] found id: ""
	I1210 07:53:56.815821 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.815831 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:56.815842 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:56.815856 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:56.834351 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:56.834380 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:56.907823 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:56.899947    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.900451    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.901995    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.902428    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.903899    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:56.899947    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.900451    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.901995    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.902428    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.903899    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:56.907857 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:56.907871 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:56.933197 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:56.933233 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:56.964346 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:56.964378 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:53:55.554659 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:58.054078 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:00.054143 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:59.520946 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:59.531324 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:59.531414 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:59.563870 1078428 cri.go:89] found id: ""
	I1210 07:53:59.563897 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.563907 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:59.563913 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:59.564000 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:59.593355 1078428 cri.go:89] found id: ""
	I1210 07:53:59.593385 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.593394 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:59.593400 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:59.593468 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:59.620235 1078428 cri.go:89] found id: ""
	I1210 07:53:59.620263 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.620272 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:59.620278 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:59.620338 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:59.645074 1078428 cri.go:89] found id: ""
	I1210 07:53:59.645099 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.645108 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:59.645114 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:59.645178 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:59.673804 1078428 cri.go:89] found id: ""
	I1210 07:53:59.673830 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.673839 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:59.673845 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:59.673902 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:59.697766 1078428 cri.go:89] found id: ""
	I1210 07:53:59.697793 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.697803 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:59.697810 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:59.697868 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:59.725582 1078428 cri.go:89] found id: ""
	I1210 07:53:59.725608 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.725617 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:59.725623 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:59.725681 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:59.750402 1078428 cri.go:89] found id: ""
	I1210 07:53:59.750428 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.750437 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:59.750447 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:59.750458 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:59.775346 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:59.775383 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:59.815776 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:59.815804 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:59.876120 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:59.876164 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:59.897440 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:59.897470 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:59.962486 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:59.954416    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.955115    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.956695    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.956999    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.958498    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:59.954416    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.955115    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.956695    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.956999    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.958498    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
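Each diagnosis pass ends by collecting the same three log sources seen throughout this section: the last 400 journal lines for kubelet and containerd, plus warn-and-above kernel messages. Run locally rather than over SSH, and with the dmesg flags trimmed to a portable subset, the collection step reduces to roughly this sketch (the simplified dmesg invocation and all names are assumptions):

package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs runs the same collectors the harness invokes over SSH:
// the tail of a systemd unit's journal, plus warn-and-above kernel
// messages. Each source's output is returned as a single string.
func gatherLogs() map[string]string {
	cmds := map[string][]string{
		"kubelet":    {"sudo", "journalctl", "-u", "kubelet", "-n", "400"},
		"containerd": {"sudo", "journalctl", "-u", "containerd", "-n", "400"},
		"dmesg":      {"bash", "-c", "sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400"},
	}
	logs := make(map[string]string)
	for name, argv := range cmds {
		out, err := exec.Command(argv[0], argv[1:]...).CombinedOutput()
		if err != nil {
			logs[name] = fmt.Sprintf("collection failed: %v", err)
			continue
		}
		logs[name] = string(out)
	}
	return logs
}

func main() {
	for name, body := range gatherLogs() {
		fmt.Printf("== %s (%d bytes)\n", name, len(body))
	}
}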
	I1210 07:54:02.463154 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:02.473950 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:02.474039 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:02.498884 1078428 cri.go:89] found id: ""
	I1210 07:54:02.498907 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.498916 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:02.498923 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:02.498982 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:02.523553 1078428 cri.go:89] found id: ""
	I1210 07:54:02.523582 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.523591 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:02.523597 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:02.523660 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:02.552876 1078428 cri.go:89] found id: ""
	I1210 07:54:02.552902 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.552911 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:02.552918 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:02.552976 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:02.583793 1078428 cri.go:89] found id: ""
	I1210 07:54:02.583818 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.583827 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:02.583833 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:02.583895 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:02.625932 1078428 cri.go:89] found id: ""
	I1210 07:54:02.625959 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.625969 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:02.625976 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:02.626044 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:02.652709 1078428 cri.go:89] found id: ""
	I1210 07:54:02.652784 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.652800 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:02.652808 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:02.652868 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:02.680830 1078428 cri.go:89] found id: ""
	I1210 07:54:02.680859 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.680868 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:02.680874 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:02.680933 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:02.706663 1078428 cri.go:89] found id: ""
	I1210 07:54:02.706687 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.706696 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:02.706704 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:02.706715 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:02.763069 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:02.763105 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:02.779309 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:02.779340 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:02.864302 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:02.854179    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.854989    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.858064    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.858689    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.860317    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:02.854179    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.854989    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.858064    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.858689    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.860317    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:02.864326 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:02.864339 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:02.890235 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:02.890274 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:54:02.554570 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:04.555006 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:05.418128 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:05.429523 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:05.429604 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:05.456726 1078428 cri.go:89] found id: ""
	I1210 07:54:05.456755 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.456765 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:05.456772 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:05.456851 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:05.485039 1078428 cri.go:89] found id: ""
	I1210 07:54:05.485065 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.485074 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:05.485080 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:05.485169 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:05.510634 1078428 cri.go:89] found id: ""
	I1210 07:54:05.510658 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.510668 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:05.510674 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:05.510733 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:05.536710 1078428 cri.go:89] found id: ""
	I1210 07:54:05.536743 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.536753 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:05.536760 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:05.536848 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:05.568911 1078428 cri.go:89] found id: ""
	I1210 07:54:05.568991 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.569015 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:05.569040 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:05.569150 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:05.598888 1078428 cri.go:89] found id: ""
	I1210 07:54:05.598964 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.598987 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:05.599007 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:05.599101 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:05.630665 1078428 cri.go:89] found id: ""
	I1210 07:54:05.630741 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.630771 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:05.630779 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:05.630850 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:05.654676 1078428 cri.go:89] found id: ""
	I1210 07:54:05.654702 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.654712 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:05.654722 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:05.654733 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:05.712685 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:05.712722 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:05.728743 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:05.728774 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:05.807287 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:05.790159    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.790981    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.792596    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.793583    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.794194    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:05.790159    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.790981    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.792596    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.793583    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.794194    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:05.807311 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:05.807325 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:05.835209 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:05.835246 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
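The "container status" collector encodes a fallback chain in shell: resolve crictl via which (else use the bare name), and if crictl fails entirely, fall back to docker ps -a. The same try-then-fallback pattern, rendered directly in Go (helper name hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus tries crictl first and falls back to docker, the
// same chain the harness encodes as:
//   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
func containerStatus() (string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err == nil {
		return string(out), nil
	}
	out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("both crictl and docker failed: %w", err)
	}
	return string(out), nil
}

func main() {
	status, err := containerStatus()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(status)
}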
	I1210 07:54:08.367017 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:08.377830 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:08.377904 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:08.402753 1078428 cri.go:89] found id: ""
	I1210 07:54:08.402778 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.402787 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:08.402795 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:08.402856 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:08.427920 1078428 cri.go:89] found id: ""
	I1210 07:54:08.427947 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.427956 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:08.427963 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:08.428021 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:08.453012 1078428 cri.go:89] found id: ""
	I1210 07:54:08.453037 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.453045 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:08.453052 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:08.453114 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:08.477565 1078428 cri.go:89] found id: ""
	I1210 07:54:08.477591 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.477606 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:08.477612 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:08.477673 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:08.501669 1078428 cri.go:89] found id: ""
	I1210 07:54:08.501694 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.501740 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:08.501750 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:08.501816 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:08.530594 1078428 cri.go:89] found id: ""
	I1210 07:54:08.530667 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.530704 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:08.530719 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:08.530799 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:08.561145 1078428 cri.go:89] found id: ""
	I1210 07:54:08.561171 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.561179 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:08.561186 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:08.561244 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:08.595663 1078428 cri.go:89] found id: ""
	I1210 07:54:08.595686 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.595695 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:08.595706 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:08.595718 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:08.622963 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:08.623002 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:08.652801 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:08.652829 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:08.708272 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:08.708307 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:08.724144 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:08.724174 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:08.790000 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:08.782113    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.782760    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.784347    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.784840    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.786314    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
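Each cycle in this log is the same sweep: minikube first looks for a host kube-apiserver process with pgrep, then asks the CRI for containers matching each control-plane component name, and finds none. A rough standalone sketch of that sweep follows, with the two command strings copied from the log; the loop bound and the 3-second interval are illustrative assumptions (matching the observed timestamps), not minikube's actual wait logic, and only the kube-apiserver check is shown.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// runningAPIServer mirrors the two checks from the log: a host process
// match via pgrep, then a CRI container match via crictl.
func runningAPIServer() bool {
	if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
		return true // a matching host process exists
	}
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	return err == nil && strings.TrimSpace(string(out)) != "" // a matching container exists
}

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		if runningAPIServer() {
			fmt.Println("kube-apiserver found")
			return
		}
		time.Sleep(3 * time.Second) // the log shows roughly 3s between sweeps
	}
	fmt.Println("gave up: no kube-apiserver process or container")
}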
	W1210 07:54:07.054035 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:09.054348 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:11.291584 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:11.302037 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:11.302111 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:11.331607 1078428 cri.go:89] found id: ""
	I1210 07:54:11.331631 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.331640 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:11.331646 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:11.331711 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:11.355008 1078428 cri.go:89] found id: ""
	I1210 07:54:11.355031 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.355039 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:11.355045 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:11.355104 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:11.380347 1078428 cri.go:89] found id: ""
	I1210 07:54:11.380423 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.380463 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:11.380485 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:11.380572 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:11.410797 1078428 cri.go:89] found id: ""
	I1210 07:54:11.410824 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.410834 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:11.410840 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:11.410898 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:11.435927 1078428 cri.go:89] found id: ""
	I1210 07:54:11.435996 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.436021 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:11.436035 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:11.436109 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:11.461484 1078428 cri.go:89] found id: ""
	I1210 07:54:11.461520 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.461529 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:11.461536 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:11.461603 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:11.486793 1078428 cri.go:89] found id: ""
	I1210 07:54:11.486817 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.486825 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:11.486831 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:11.486890 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:11.515338 1078428 cri.go:89] found id: ""
	I1210 07:54:11.515364 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.515374 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:11.515384 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:11.515396 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:11.593473 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:11.585339    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.586129    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.587754    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.588062    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.589542    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:11.593495 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:11.593509 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:11.619492 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:11.619523 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:11.646739 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:11.646771 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:11.701149 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:11.701187 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
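Every "Gathering logs for ..." pair above corresponds to one shell command run on the node over SSH. Here is a compact, hypothetical harness that collects the same four sources shown in this cycle (kubelet, containerd, dmesg, container status), with the command strings copied verbatim from the log; a slice rather than a map keeps the collection order stable.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Command strings taken verbatim from the ssh_runner invocations above.
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		fmt.Printf("=== %s (err: %v) ===\n%s\n", s.name, err, out)
	}
}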
	I1210 07:54:14.217342 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:14.228228 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:14.228306 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:14.254323 1078428 cri.go:89] found id: ""
	I1210 07:54:14.254360 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.254369 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:14.254375 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:14.254443 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:14.279268 1078428 cri.go:89] found id: ""
	I1210 07:54:14.279295 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.279303 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:14.279310 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:14.279397 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:14.304531 1078428 cri.go:89] found id: ""
	I1210 07:54:14.304558 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.304567 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:14.304574 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:14.304647 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:14.329458 1078428 cri.go:89] found id: ""
	I1210 07:54:14.329487 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.329496 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:14.329502 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:14.329563 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:14.359168 1078428 cri.go:89] found id: ""
	I1210 07:54:14.359241 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.359258 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:14.359266 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:14.359348 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:14.386391 1078428 cri.go:89] found id: ""
	I1210 07:54:14.386426 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.386435 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:14.386442 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:14.386540 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:14.411808 1078428 cri.go:89] found id: ""
	I1210 07:54:14.411843 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.411862 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:14.411870 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:14.411946 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:14.440262 1078428 cri.go:89] found id: ""
	I1210 07:54:14.440292 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.440301 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:14.440311 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:14.440322 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:54:11.553952 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:13.554999 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:14.496340 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:14.496376 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:14.512934 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:14.512963 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:14.584969 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:14.576398    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.577208    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.578910    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.579485    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.581005    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:14.585042 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:14.585069 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:14.615045 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:14.615086 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:17.146612 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:17.157236 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:17.157307 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:17.184080 1078428 cri.go:89] found id: ""
	I1210 07:54:17.184102 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.184111 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:17.184117 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:17.184177 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:17.212720 1078428 cri.go:89] found id: ""
	I1210 07:54:17.212745 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.212754 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:17.212760 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:17.212822 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:17.238495 1078428 cri.go:89] found id: ""
	I1210 07:54:17.238521 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.238529 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:17.238542 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:17.238603 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:17.262892 1078428 cri.go:89] found id: ""
	I1210 07:54:17.262921 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.262930 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:17.262936 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:17.262996 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:17.291473 1078428 cri.go:89] found id: ""
	I1210 07:54:17.291498 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.291508 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:17.291514 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:17.291573 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:17.317108 1078428 cri.go:89] found id: ""
	I1210 07:54:17.317133 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.317142 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:17.317149 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:17.317209 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:17.344918 1078428 cri.go:89] found id: ""
	I1210 07:54:17.344944 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.344953 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:17.344959 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:17.345019 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:17.370082 1078428 cri.go:89] found id: ""
	I1210 07:54:17.370109 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.370118 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:17.370128 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:17.370139 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:17.427357 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:17.427407 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:17.443363 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:17.443393 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:17.509516 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:17.501130    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.501831    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.503489    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.503992    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.505575    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:17.509538 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:17.509551 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:17.535043 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:17.535078 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:54:16.053965 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:18.054150 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
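The W-lines from PID 1077343 interleaved here come from a second, concurrently failing profile: the no-preload test polling its node's Ready condition roughly every two seconds against https://192.168.85.2:8443 and hitting the same connection-refused error. A hypothetical standalone version of that retry loop, using plain net/http rather than client-go; the real endpoint would also demand credentials, so this sketch only demonstrates the reachability check and backoff.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// URL copied from the log; diagnostic-only TLS config as in the probe above.
	url := "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for attempt := 1; attempt <= 5; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("attempt %d: %v (will retry)\n", attempt, err)
			time.Sleep(2 * time.Second) // the log shows ~2s between retries
			continue
		}
		resp.Body.Close()
		fmt.Println("node endpoint reachable:", resp.Status)
		return
	}
	fmt.Println("gave up: apiserver still refusing connections")
}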
	I1210 07:54:20.071194 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:20.083928 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:20.084059 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:20.119958 1078428 cri.go:89] found id: ""
	I1210 07:54:20.119987 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.119996 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:20.120002 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:20.120062 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:20.144861 1078428 cri.go:89] found id: ""
	I1210 07:54:20.144883 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.144891 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:20.144897 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:20.144957 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:20.180042 1078428 cri.go:89] found id: ""
	I1210 07:54:20.180069 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.180078 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:20.180085 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:20.180151 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:20.208390 1078428 cri.go:89] found id: ""
	I1210 07:54:20.208423 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.208432 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:20.208439 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:20.208511 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:20.234337 1078428 cri.go:89] found id: ""
	I1210 07:54:20.234358 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.234367 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:20.234373 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:20.234441 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:20.263116 1078428 cri.go:89] found id: ""
	I1210 07:54:20.263138 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.263146 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:20.263153 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:20.263213 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:20.287115 1078428 cri.go:89] found id: ""
	I1210 07:54:20.287188 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.287203 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:20.287210 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:20.287281 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:20.312391 1078428 cri.go:89] found id: ""
	I1210 07:54:20.312415 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.312423 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:20.312432 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:20.312443 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:20.369802 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:20.369838 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:20.387018 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:20.387099 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:20.458731 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:20.450398    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.451165    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.452844    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.453407    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.454975    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:20.458801 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:20.458828 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:20.483627 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:20.483662 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:23.014658 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:23.025123 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:23.025235 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:23.060798 1078428 cri.go:89] found id: ""
	I1210 07:54:23.060872 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.060909 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:23.060934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:23.061025 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:23.092890 1078428 cri.go:89] found id: ""
	I1210 07:54:23.092965 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.092987 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:23.093018 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:23.093129 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:23.122215 1078428 cri.go:89] found id: ""
	I1210 07:54:23.122290 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.122314 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:23.122335 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:23.122418 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:23.147080 1078428 cri.go:89] found id: ""
	I1210 07:54:23.147108 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.147117 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:23.147123 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:23.147213 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:23.171020 1078428 cri.go:89] found id: ""
	I1210 07:54:23.171043 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.171052 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:23.171064 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:23.171120 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:23.195821 1078428 cri.go:89] found id: ""
	I1210 07:54:23.195889 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.195914 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:23.195929 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:23.196016 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:23.219821 1078428 cri.go:89] found id: ""
	I1210 07:54:23.219901 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.219926 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:23.219941 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:23.220025 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:23.248052 1078428 cri.go:89] found id: ""
	I1210 07:54:23.248079 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.248088 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:23.248098 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:23.248109 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:23.305179 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:23.305215 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:23.321081 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:23.321111 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:23.391528 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:23.376042    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.376445    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.378118    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.385464    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.387083    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:23.391553 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:23.391565 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:23.416476 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:23.416509 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:54:20.554048 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:22.554698 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:24.554805 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:25.951859 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:25.962115 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:25.962185 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:25.986216 1078428 cri.go:89] found id: ""
	I1210 07:54:25.986286 1078428 logs.go:282] 0 containers: []
	W1210 07:54:25.986310 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:25.986334 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:25.986426 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:26.011668 1078428 cri.go:89] found id: ""
	I1210 07:54:26.011696 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.011705 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:26.011712 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:26.011773 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:26.037538 1078428 cri.go:89] found id: ""
	I1210 07:54:26.037560 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.037569 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:26.037575 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:26.037634 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:26.066974 1078428 cri.go:89] found id: ""
	I1210 07:54:26.066996 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.067006 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:26.067013 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:26.067071 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:26.100870 1078428 cri.go:89] found id: ""
	I1210 07:54:26.100892 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.100901 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:26.100907 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:26.100966 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:26.130861 1078428 cri.go:89] found id: ""
	I1210 07:54:26.130883 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.130891 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:26.130897 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:26.130957 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:26.156407 1078428 cri.go:89] found id: ""
	I1210 07:54:26.156429 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.156438 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:26.156444 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:26.156502 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:26.182081 1078428 cri.go:89] found id: ""
	I1210 07:54:26.182102 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.182110 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:26.182119 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:26.182133 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:26.239878 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:26.239917 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:26.259189 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:26.259219 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:26.328449 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:26.321353    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.321729    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.322876    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.323201    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.324632    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:26.328475 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:26.328490 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:26.353246 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:26.353278 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:28.882607 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:28.893420 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:28.893495 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:28.917577 1078428 cri.go:89] found id: ""
	I1210 07:54:28.917603 1078428 logs.go:282] 0 containers: []
	W1210 07:54:28.917611 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:28.917617 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:28.917677 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:28.949094 1078428 cri.go:89] found id: ""
	I1210 07:54:28.949123 1078428 logs.go:282] 0 containers: []
	W1210 07:54:28.949132 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:28.949138 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:28.949202 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:28.976683 1078428 cri.go:89] found id: ""
	I1210 07:54:28.976708 1078428 logs.go:282] 0 containers: []
	W1210 07:54:28.976716 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:28.976722 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:28.976783 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:29.001326 1078428 cri.go:89] found id: ""
	I1210 07:54:29.001395 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.001420 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:29.001440 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:29.001526 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:29.026870 1078428 cri.go:89] found id: ""
	I1210 07:54:29.026894 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.026903 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:29.026909 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:29.026992 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:29.059072 1078428 cri.go:89] found id: ""
	I1210 07:54:29.059106 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.059115 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:29.059122 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:29.059190 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:29.089329 1078428 cri.go:89] found id: ""
	I1210 07:54:29.089363 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.089372 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:29.089379 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:29.089446 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:29.116648 1078428 cri.go:89] found id: ""
	I1210 07:54:29.116671 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.116680 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
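	Each cycle above first checks for a live apiserver process (pgrep) and then sweeps the expected control-plane containers by name; an empty ID list for every name means containerd never created them. A hedged condensation of that sweep as a one-off shell loop (not minikube's own code, which lives in cri.go):
	
	    # Print each expected component next to its container IDs;
	    # on this node every line comes back empty.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      printf '%-24s %s\n' "$name" "$(sudo crictl ps -a --quiet --name="$name")"
	    done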
	I1210 07:54:29.116689 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:29.116701 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:29.141429 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:29.141465 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:29.168073 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:29.168102 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:29.223128 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:29.223165 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
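	The kubelet and dmesg passes use the same journal/tail pattern; the dmesg flags restrict output to warning level and above (-H human-readable, -L=never no color, -P no pager). Equivalent manual commands, for reference:
	
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400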
	I1210 07:54:29.239118 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:29.239149 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:29.304306 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:29.295477    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.296316    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.297933    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.298227    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.300445    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
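	The describe-nodes step fails for the same underlying reason as everything else in this cycle: the in-node kubeconfig points kubectl at localhost:8443, and with no kube-apiserver container running nothing is listening there, so every API call is refused before TLS can even be negotiated. A quick probe that fails the same way (a sketch, run on the node itself):
	
	    # -k: the apiserver cert would not matter here anyway; the TCP
	    # connect itself is refused while the apiserver is down.
	    curl -sk https://localhost:8443/healthz || echo 'apiserver not reachable'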
	W1210 07:54:27.054859 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:29.554545 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
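	The interleaved 1077343 lines come from the no-preload test polling its own node, no-preload-587009, for the Ready condition against 192.168.85.2:8443 and retrying on each refused connection. Roughly the same check by hand, assuming a working kubeconfig for that profile (a hypothetical invocation, not taken from the log):
	
	    kubectl get node no-preload-587009 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'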
	I1210 07:54:31.805827 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:31.819227 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:31.819305 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:31.852872 1078428 cri.go:89] found id: ""
	I1210 07:54:31.852901 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.852910 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:31.852916 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:31.852973 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:31.881145 1078428 cri.go:89] found id: ""
	I1210 07:54:31.881173 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.881182 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:31.881188 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:31.881249 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:31.907195 1078428 cri.go:89] found id: ""
	I1210 07:54:31.907218 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.907227 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:31.907233 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:31.907292 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:31.931775 1078428 cri.go:89] found id: ""
	I1210 07:54:31.931799 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.931808 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:31.931814 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:31.931876 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:31.957735 1078428 cri.go:89] found id: ""
	I1210 07:54:31.957764 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.957772 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:31.957779 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:31.957837 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:31.982202 1078428 cri.go:89] found id: ""
	I1210 07:54:31.982285 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.982308 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:31.982334 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:31.982441 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:32.011091 1078428 cri.go:89] found id: ""
	I1210 07:54:32.011119 1078428 logs.go:282] 0 containers: []
	W1210 07:54:32.011129 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:32.011138 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:32.011205 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:32.039293 1078428 cri.go:89] found id: ""
	I1210 07:54:32.039371 1078428 logs.go:282] 0 containers: []
	W1210 07:54:32.039388 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:32.039399 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:32.039410 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:32.067441 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:32.067482 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:32.105238 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:32.105273 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:32.164873 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:32.164913 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:32.181394 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:32.181477 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:32.250195 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:32.241937    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.242446    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.244284    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.244640    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.246223    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 07:54:32.054006 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:34.054566 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:34.751129 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:34.761490 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:34.761559 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:34.785680 1078428 cri.go:89] found id: ""
	I1210 07:54:34.785702 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.785711 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:34.785716 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:34.785775 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:34.820785 1078428 cri.go:89] found id: ""
	I1210 07:54:34.820809 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.820817 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:34.820823 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:34.820892 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:34.852508 1078428 cri.go:89] found id: ""
	I1210 07:54:34.852531 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.852539 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:34.852545 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:34.852604 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:34.879064 1078428 cri.go:89] found id: ""
	I1210 07:54:34.879095 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.879104 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:34.879111 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:34.879179 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:34.908815 1078428 cri.go:89] found id: ""
	I1210 07:54:34.908849 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.908858 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:34.908864 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:34.908933 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:34.939793 1078428 cri.go:89] found id: ""
	I1210 07:54:34.939820 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.939831 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:34.939838 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:34.939902 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:34.966660 1078428 cri.go:89] found id: ""
	I1210 07:54:34.966730 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.966754 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:34.966775 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:34.966877 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:34.997175 1078428 cri.go:89] found id: ""
	I1210 07:54:34.997202 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.997211 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:34.997221 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:34.997233 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:35.054362 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:35.054504 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:35.071310 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:35.071339 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:35.154263 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:35.146083    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.146679    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.148200    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.148710    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.150271    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:35.154285 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:35.154298 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:35.184377 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:35.184427 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:37.716479 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:37.727384 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:37.727475 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:37.758151 1078428 cri.go:89] found id: ""
	I1210 07:54:37.758175 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.758183 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:37.758189 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:37.758249 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:37.783547 1078428 cri.go:89] found id: ""
	I1210 07:54:37.783572 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.783580 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:37.783586 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:37.783652 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:37.824269 1078428 cri.go:89] found id: ""
	I1210 07:54:37.824302 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.824320 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:37.824326 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:37.824392 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:37.859292 1078428 cri.go:89] found id: ""
	I1210 07:54:37.859315 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.859324 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:37.859332 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:37.859391 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:37.887370 1078428 cri.go:89] found id: ""
	I1210 07:54:37.887395 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.887404 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:37.887411 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:37.887471 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:37.912568 1078428 cri.go:89] found id: ""
	I1210 07:54:37.912590 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.912599 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:37.912605 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:37.912667 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:37.942226 1078428 cri.go:89] found id: ""
	I1210 07:54:37.942294 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.942321 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:37.942341 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:37.942416 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:37.967116 1078428 cri.go:89] found id: ""
	I1210 07:54:37.967186 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.967211 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:37.967234 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:37.967261 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:38.026081 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:38.026123 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:38.044051 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:38.044086 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:38.137383 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:38.129031    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.129785    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.131296    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.131679    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.133181    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:38.137408 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:38.137420 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:38.163137 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:38.163174 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:54:36.553998 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:38.554925 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:40.692712 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:40.705786 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:40.705862 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:40.730857 1078428 cri.go:89] found id: ""
	I1210 07:54:40.730881 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.730890 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:40.730896 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:40.730956 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:40.759374 1078428 cri.go:89] found id: ""
	I1210 07:54:40.759401 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.759410 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:40.759417 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:40.759481 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:40.784874 1078428 cri.go:89] found id: ""
	I1210 07:54:40.784898 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.784906 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:40.784912 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:40.784972 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:40.829615 1078428 cri.go:89] found id: ""
	I1210 07:54:40.829638 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.829648 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:40.829655 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:40.829714 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:40.855514 1078428 cri.go:89] found id: ""
	I1210 07:54:40.855537 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.855547 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:40.855553 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:40.855622 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:40.880645 1078428 cri.go:89] found id: ""
	I1210 07:54:40.880674 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.880683 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:40.880699 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:40.880762 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:40.908526 1078428 cri.go:89] found id: ""
	I1210 07:54:40.908553 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.908562 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:40.908568 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:40.908627 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:40.933389 1078428 cri.go:89] found id: ""
	I1210 07:54:40.933417 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.933427 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:40.933466 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:40.933485 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:40.989429 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:40.989508 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:41.005657 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:41.005748 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:41.093001 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:41.084101    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.084887    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.086620    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.087167    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.088880    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:41.093075 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:41.093107 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:41.120941 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:41.121022 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:43.650332 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:43.660886 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:43.660957 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:43.685546 1078428 cri.go:89] found id: ""
	I1210 07:54:43.685572 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.685582 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:43.685590 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:43.685652 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:43.710551 1078428 cri.go:89] found id: ""
	I1210 07:54:43.710575 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.710584 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:43.710590 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:43.710651 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:43.735321 1078428 cri.go:89] found id: ""
	I1210 07:54:43.735347 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.735357 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:43.735363 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:43.735422 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:43.760265 1078428 cri.go:89] found id: ""
	I1210 07:54:43.760290 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.760299 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:43.760305 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:43.760371 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:43.785386 1078428 cri.go:89] found id: ""
	I1210 07:54:43.785412 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.785421 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:43.785427 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:43.785491 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:43.812278 1078428 cri.go:89] found id: ""
	I1210 07:54:43.812305 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.812323 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:43.812331 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:43.812390 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:43.844260 1078428 cri.go:89] found id: ""
	I1210 07:54:43.844288 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.844297 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:43.844303 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:43.844374 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:43.878456 1078428 cri.go:89] found id: ""
	I1210 07:54:43.878503 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.878512 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:43.878522 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:43.878533 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:43.934467 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:43.934503 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:43.951761 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:43.951790 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:44.019672 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:44.010215    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.011300    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.013256    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.013896    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.015584    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:44.019739 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:44.019764 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:44.045374 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:44.045448 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:54:41.053999 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:43.054974 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:45.055139 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:46.583553 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:46.594544 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:46.594614 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:46.620989 1078428 cri.go:89] found id: ""
	I1210 07:54:46.621016 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.621026 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:46.621032 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:46.621092 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:46.646885 1078428 cri.go:89] found id: ""
	I1210 07:54:46.646912 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.646921 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:46.646927 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:46.646993 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:46.671522 1078428 cri.go:89] found id: ""
	I1210 07:54:46.671545 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.671555 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:46.671561 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:46.671627 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:46.697035 1078428 cri.go:89] found id: ""
	I1210 07:54:46.697057 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.697066 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:46.697076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:46.697135 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:46.721985 1078428 cri.go:89] found id: ""
	I1210 07:54:46.722008 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.722016 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:46.722023 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:46.722081 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:46.750862 1078428 cri.go:89] found id: ""
	I1210 07:54:46.750885 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.750894 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:46.750900 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:46.750957 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:46.775321 1078428 cri.go:89] found id: ""
	I1210 07:54:46.775347 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.775357 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:46.775363 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:46.775422 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:46.804576 1078428 cri.go:89] found id: ""
	I1210 07:54:46.804603 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.804612 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:46.804624 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:46.804635 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:46.869024 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:46.869059 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:46.887039 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:46.887068 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:46.955257 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:46.946979    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.947599    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.949092    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.949593    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.951087    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:46.955281 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:46.955294 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:46.981722 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:46.981766 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:54:47.553929 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:49.554809 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:49.512895 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:49.523585 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:49.523660 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:49.553762 1078428 cri.go:89] found id: ""
	I1210 07:54:49.553799 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.553809 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:49.553815 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:49.553883 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:49.584365 1078428 cri.go:89] found id: ""
	I1210 07:54:49.584397 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.584406 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:49.584412 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:49.584473 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:49.609054 1078428 cri.go:89] found id: ""
	I1210 07:54:49.609078 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.609088 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:49.609094 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:49.609153 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:49.633506 1078428 cri.go:89] found id: ""
	I1210 07:54:49.633585 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.633612 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:49.633632 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:49.633727 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:49.660681 1078428 cri.go:89] found id: ""
	I1210 07:54:49.660705 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.660713 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:49.660719 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:49.660779 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:49.684429 1078428 cri.go:89] found id: ""
	I1210 07:54:49.684456 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.684465 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:49.684472 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:49.684559 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:49.708792 1078428 cri.go:89] found id: ""
	I1210 07:54:49.708825 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.708834 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:49.708841 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:49.708907 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:49.733028 1078428 cri.go:89] found id: ""
	I1210 07:54:49.733061 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.733070 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:49.733080 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:49.733093 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:49.788419 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:49.788454 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:49.806199 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:49.806229 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:49.890193 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:49.880173    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.881024    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.882835    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.883725    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.885590    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:49.880173    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.881024    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.882835    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.883725    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.885590    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:49.890216 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:49.890229 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:49.916164 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:49.916201 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
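The cycle above is minikube's control-plane probe while it waits for the apiserver to come up: one pgrep for a running kube-apiserver process, then one "crictl ps -a --quiet --name=<component>" per component, with empty output read as "0 containers". A minimal local Go sketch of that probe, shelling out directly instead of going through minikube's SSH runner (the helper and variable names are illustrative, not minikube's actual API):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors the `crictl ps -a --quiet --name=<name>` calls in
	// the log: quiet mode prints one container ID per line, so empty output
	// means no matching container exists.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Printf("probe %q failed: %v\n", c, err)
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids) // the log shows 0 containers: [] throughout
		}
	}

Every cycle in this log returns found id: "" for all eight names, which is why each pass falls through to the kubelet/containerd/dmesg gather steps below.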
	I1210 07:54:52.445192 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:52.455938 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:52.456011 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:52.483578 1078428 cri.go:89] found id: ""
	I1210 07:54:52.483607 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.483615 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:52.483622 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:52.483681 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:52.508996 1078428 cri.go:89] found id: ""
	I1210 07:54:52.509019 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.509028 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:52.509035 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:52.509100 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:52.534163 1078428 cri.go:89] found id: ""
	I1210 07:54:52.534189 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.534197 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:52.534204 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:52.534262 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:52.559446 1078428 cri.go:89] found id: ""
	I1210 07:54:52.559468 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.559476 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:52.559482 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:52.559538 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:52.585685 1078428 cri.go:89] found id: ""
	I1210 07:54:52.585705 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.585714 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:52.585720 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:52.585781 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:52.610362 1078428 cri.go:89] found id: ""
	I1210 07:54:52.610387 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.610396 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:52.610429 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:52.610553 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:52.639114 1078428 cri.go:89] found id: ""
	I1210 07:54:52.639140 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.639149 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:52.639155 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:52.639239 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:52.669083 1078428 cri.go:89] found id: ""
	I1210 07:54:52.669111 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.669120 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:52.669129 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:52.669141 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:52.684926 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:52.684953 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:52.749001 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:52.740050    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.740864    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.742595    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.743091    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.744746    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:52.740050    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.740864    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.742595    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.743091    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.744746    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:52.749025 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:52.749037 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:52.773227 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:52.773261 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:52.804197 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:52.804276 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:54:52.054720 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:54.555065 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
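Interleaved with the gather loop, a second profile (process 1077343, no-preload-587009) is polling the node's Ready condition and hitting "connect: connection refused" on 192.168.85.2:8443 at roughly 2.5 s intervals. The real check goes through client-go with the cluster's certificates; a stripped-down sketch, under that simplifying assumption, that reproduces just the TCP-level failure seen in the warnings:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Address taken from the log. No apiserver container exists yet, so
		// nothing listens on 8443, the dial fails with "connection refused",
		// and we retry, like the node_ready warnings above.
		addr := "192.168.85.2:8443"
		for attempt := 1; attempt <= 10; attempt++ {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err != nil {
				fmt.Printf("attempt %d: %v (will retry)\n", attempt, err)
				time.Sleep(2500 * time.Millisecond)
				continue
			}
			conn.Close()
			fmt.Println("apiserver port is accepting connections")
			return
		}
	}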
	I1210 07:54:55.368759 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:55.379351 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:55.379439 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:55.403912 1078428 cri.go:89] found id: ""
	I1210 07:54:55.403937 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.403946 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:55.403953 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:55.404021 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:55.432879 1078428 cri.go:89] found id: ""
	I1210 07:54:55.432902 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.432912 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:55.432918 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:55.432981 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:55.457499 1078428 cri.go:89] found id: ""
	I1210 07:54:55.457528 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.457537 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:55.457546 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:55.457605 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:55.482796 1078428 cri.go:89] found id: ""
	I1210 07:54:55.482824 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.482833 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:55.482840 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:55.482900 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:55.508135 1078428 cri.go:89] found id: ""
	I1210 07:54:55.508158 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.508167 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:55.508173 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:55.508239 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:55.532757 1078428 cri.go:89] found id: ""
	I1210 07:54:55.532828 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.532849 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:55.532856 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:55.532923 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:55.558383 1078428 cri.go:89] found id: ""
	I1210 07:54:55.558408 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.558431 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:55.558437 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:55.558540 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:55.584737 1078428 cri.go:89] found id: ""
	I1210 07:54:55.584768 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.584780 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:55.584790 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:55.584802 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:55.611899 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:55.611929 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:55.667940 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:55.667974 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:55.683872 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:55.683902 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:55.753488 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:55.745404    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.746023    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.747737    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.748225    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.749705    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:55.745404    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.746023    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.747737    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.748225    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.749705    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:55.753511 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:55.753523 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:58.279433 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:58.290275 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:58.290358 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:58.315732 1078428 cri.go:89] found id: ""
	I1210 07:54:58.315760 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.315769 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:58.315775 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:58.315840 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:58.354970 1078428 cri.go:89] found id: ""
	I1210 07:54:58.354993 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.355002 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:58.355009 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:58.355080 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:58.387261 1078428 cri.go:89] found id: ""
	I1210 07:54:58.387290 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.387300 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:58.387307 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:58.387366 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:58.415659 1078428 cri.go:89] found id: ""
	I1210 07:54:58.415683 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.415691 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:58.415698 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:58.415762 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:58.440257 1078428 cri.go:89] found id: ""
	I1210 07:54:58.440283 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.440292 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:58.440298 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:58.440380 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:58.465572 1078428 cri.go:89] found id: ""
	I1210 07:54:58.465598 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.465607 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:58.465614 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:58.465672 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:58.490288 1078428 cri.go:89] found id: ""
	I1210 07:54:58.490313 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.490321 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:58.490327 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:58.490384 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:58.516549 1078428 cri.go:89] found id: ""
	I1210 07:54:58.516572 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.516580 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:58.516590 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:58.516601 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:58.542195 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:58.542234 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:58.570592 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:58.570623 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:58.627983 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:58.628020 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:58.644192 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:58.644218 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:58.708892 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:58.700163    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.700595    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.702324    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.703126    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.704702    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:58.700163    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.700595    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.702324    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.703126    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.704702    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
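Each "failed describe nodes" block follows the same shape: empty stdout, and a stderr full of "connection refused" from kubectl's discovery client, because the crictl probe just confirmed there is no kube-apiserver container to listen on 8443. kubectl retries discovery a few times (the five memcache.go lines) before exiting with status 1. A sketch of that gather step, with the command string copied from the log and the stdout/stderr capture illustrative:

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("/bin/bash", "-c",
			"sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
		var stdout, stderr bytes.Buffer
		cmd.Stdout, cmd.Stderr = &stdout, &stderr
		err := cmd.Run()
		fmt.Printf("stdout:\n%s\nstderr:\n%s\n", stdout.String(), stderr.String())
		if err != nil {
			// With nothing listening on 8443 this exits with status 1,
			// producing the "connection refused" stderr shown above.
			fmt.Println("describe nodes failed:", err)
		}
	}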
	W1210 07:54:57.053952 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:59.054069 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:01.209184 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:01.221080 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:01.221155 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:01.250125 1078428 cri.go:89] found id: ""
	I1210 07:55:01.250154 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.250163 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:01.250178 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:01.250240 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:01.276827 1078428 cri.go:89] found id: ""
	I1210 07:55:01.276854 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.276869 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:01.276876 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:01.276938 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:01.311772 1078428 cri.go:89] found id: ""
	I1210 07:55:01.311808 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.311818 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:01.311824 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:01.311894 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:01.344006 1078428 cri.go:89] found id: ""
	I1210 07:55:01.344042 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.344052 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:01.344059 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:01.344131 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:01.370453 1078428 cri.go:89] found id: ""
	I1210 07:55:01.370508 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.370517 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:01.370524 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:01.370596 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:01.396784 1078428 cri.go:89] found id: ""
	I1210 07:55:01.396811 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.396833 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:01.396840 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:01.396925 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:01.427026 1078428 cri.go:89] found id: ""
	I1210 07:55:01.427053 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.427064 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:01.427076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:01.427145 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:01.453716 1078428 cri.go:89] found id: ""
	I1210 07:55:01.453745 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.453755 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:01.453765 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:01.453787 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:01.483021 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:01.483048 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:01.538363 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:01.538402 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:01.555879 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:01.555912 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:01.624093 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:01.614677    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.615511    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.617288    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.617899    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.619694    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:01.614677    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.615511    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.617288    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.617899    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.619694    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:01.624120 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:01.624136 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:04.151461 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:04.161982 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:04.162052 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:04.187914 1078428 cri.go:89] found id: ""
	I1210 07:55:04.187940 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.187955 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:04.187961 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:04.188020 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:04.212016 1078428 cri.go:89] found id: ""
	I1210 07:55:04.212039 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.212048 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:04.212054 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:04.212113 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:04.237062 1078428 cri.go:89] found id: ""
	I1210 07:55:04.237088 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.237098 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:04.237107 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:04.237166 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:04.262844 1078428 cri.go:89] found id: ""
	I1210 07:55:04.262867 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.262876 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:04.262883 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:04.262943 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:04.288099 1078428 cri.go:89] found id: ""
	I1210 07:55:04.288125 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.288134 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:04.288140 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:04.288198 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:04.315819 1078428 cri.go:89] found id: ""
	I1210 07:55:04.315846 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.315855 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:04.315861 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:04.315923 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:04.349897 1078428 cri.go:89] found id: ""
	I1210 07:55:04.349919 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.349928 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:04.349934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:04.349992 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:04.374228 1078428 cri.go:89] found id: ""
	I1210 07:55:04.374255 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.374264 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:04.374274 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:04.374285 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:04.430541 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:04.430576 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:04.446913 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:04.446947 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:01.054690 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:03.054791 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:04.519646 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:04.510952    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.511715    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.513430    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.514116    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.515790    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:04.510952    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.511715    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.513430    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.514116    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.515790    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:04.519667 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:04.519679 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:04.545056 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:04.545097 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
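The "container status" step uses a shell fallback, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: prefer crictl if it resolves on PATH, otherwise let the bare name fail and retry with docker. The same fallback expressed without the subshell, as a hedged Go sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		if err != nil {
			// crictl missing or failing: fall back to docker, mirroring
			// `sudo crictl ps -a || sudo docker ps -a` from the log.
			out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
		}
		if err != nil {
			fmt.Println("no usable container runtime CLI:", err)
			return
		}
		fmt.Print(string(out))
	}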
	I1210 07:55:07.074592 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:07.085572 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:07.085640 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:07.111394 1078428 cri.go:89] found id: ""
	I1210 07:55:07.111418 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.111426 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:07.111432 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:07.111497 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:07.135823 1078428 cri.go:89] found id: ""
	I1210 07:55:07.135848 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.135857 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:07.135864 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:07.135923 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:07.164275 1078428 cri.go:89] found id: ""
	I1210 07:55:07.164297 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.164306 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:07.164311 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:07.164385 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:07.193334 1078428 cri.go:89] found id: ""
	I1210 07:55:07.193358 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.193367 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:07.193373 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:07.193429 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:07.217929 1078428 cri.go:89] found id: ""
	I1210 07:55:07.217955 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.217964 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:07.217970 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:07.218032 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:07.243152 1078428 cri.go:89] found id: ""
	I1210 07:55:07.243176 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.243185 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:07.243191 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:07.243251 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:07.270888 1078428 cri.go:89] found id: ""
	I1210 07:55:07.270918 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.270927 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:07.270934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:07.270992 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:07.304504 1078428 cri.go:89] found id: ""
	I1210 07:55:07.304531 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.304540 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:07.304549 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:07.304561 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:07.370744 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:07.370786 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:07.386532 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:07.386606 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:07.450870 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:07.442507    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.443254    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.444829    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.445138    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.446858    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:07.442507    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.443254    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.444829    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.445138    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.446858    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:07.450892 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:07.450906 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:07.476441 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:07.476476 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:55:05.554590 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:08.054043 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
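Between cycles the gather order rotates, but the sources stay fixed: the last 400 journal lines for the kubelet and containerd units, plus a dmesg tail filtered to warning level and above. A local repro sketch wrapping the same three commands (copied from the log; the Go wrapper itself is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmds := []string{
			"sudo journalctl -u kubelet -n 400",
			"sudo journalctl -u containerd -n 400",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		}
		for _, c := range cmds {
			out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
			if err != nil {
				fmt.Printf("%s failed: %v\n", c, err)
				continue
			}
			fmt.Printf("== %s ==\n%s", c, out)
		}
	}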
	I1210 07:55:10.006374 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:10.031408 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:10.031500 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:10.072527 1078428 cri.go:89] found id: ""
	I1210 07:55:10.072558 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.072568 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:10.072575 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:10.072637 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:10.107560 1078428 cri.go:89] found id: ""
	I1210 07:55:10.107605 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.107615 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:10.107621 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:10.107694 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:10.138416 1078428 cri.go:89] found id: ""
	I1210 07:55:10.138441 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.138450 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:10.138456 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:10.138547 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:10.163271 1078428 cri.go:89] found id: ""
	I1210 07:55:10.163294 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.163303 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:10.163309 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:10.163372 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:10.193549 1078428 cri.go:89] found id: ""
	I1210 07:55:10.193625 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.193637 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:10.193664 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:10.193766 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:10.225083 1078428 cri.go:89] found id: ""
	I1210 07:55:10.225169 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.225182 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:10.225212 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:10.225307 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:10.251042 1078428 cri.go:89] found id: ""
	I1210 07:55:10.251067 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.251082 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:10.251089 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:10.251175 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:10.275656 1078428 cri.go:89] found id: ""
	I1210 07:55:10.275681 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.275690 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:10.275699 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:10.275711 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:10.335591 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:10.335628 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:10.352546 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:10.352577 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:10.421057 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:10.412822    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.413267    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.414660    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.415223    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.416854    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:10.412822    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.413267    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.414660    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.415223    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.416854    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:10.421081 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:10.421094 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:10.446445 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:10.446578 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
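
The cycle above is minikube's log-collection probe: it first looks for a kube-apiserver process with pgrep, then queries the CRI runtime for each control-plane component by name. Every probe in this run returns an empty ID list, hence the repeated "No container was found matching" warnings. As a minimal sketch, the same probe can be reproduced by hand from a shell on the node (assuming crictl is available there, as the Run: lines imply):

    # mirrors the Run: lines in the log above; empty output means no container
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"
    done
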
	I1210 07:55:12.978285 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:12.988877 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:12.988951 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:13.014715 1078428 cri.go:89] found id: ""
	I1210 07:55:13.014738 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.014746 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:13.014753 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:13.014812 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:13.039187 1078428 cri.go:89] found id: ""
	I1210 07:55:13.039217 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.039226 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:13.039231 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:13.039293 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:13.079663 1078428 cri.go:89] found id: ""
	I1210 07:55:13.079687 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.079696 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:13.079702 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:13.079762 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:13.116097 1078428 cri.go:89] found id: ""
	I1210 07:55:13.116118 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.116127 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:13.116133 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:13.116190 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:13.141856 1078428 cri.go:89] found id: ""
	I1210 07:55:13.141921 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.141946 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:13.141973 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:13.142049 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:13.166245 1078428 cri.go:89] found id: ""
	I1210 07:55:13.166318 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.166341 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:13.166361 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:13.166452 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:13.190766 1078428 cri.go:89] found id: ""
	I1210 07:55:13.190790 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.190799 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:13.190805 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:13.190864 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:13.218179 1078428 cri.go:89] found id: ""
	I1210 07:55:13.218217 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.218227 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:13.218253 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:13.218270 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:13.234044 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:13.234082 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:13.303134 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:13.286450    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.287225    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.289050    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.289622    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.291338    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:13.286450    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.287225    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.289050    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.289622    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.291338    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:13.303158 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:13.303170 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:13.330980 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:13.331017 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:13.358836 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:13.358865 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
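
With no component containers to inspect, each cycle falls back to host-level sources: kubelet and containerd unit logs via journalctl, recent kernel warnings via dmesg, a kubectl describe of the nodes, and a raw container listing. A minimal sketch of the same gather step, assuming shell access to the node (commands taken verbatim from the Run: lines above):

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u containerd -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
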
	W1210 07:55:10.554264 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:13.054017 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:15.055138 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:15.922613 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:15.933295 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:15.933370 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:15.958341 1078428 cri.go:89] found id: ""
	I1210 07:55:15.958364 1078428 logs.go:282] 0 containers: []
	W1210 07:55:15.958373 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:15.958378 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:15.958434 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:15.983285 1078428 cri.go:89] found id: ""
	I1210 07:55:15.983309 1078428 logs.go:282] 0 containers: []
	W1210 07:55:15.983324 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:15.983330 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:15.983387 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:16.008789 1078428 cri.go:89] found id: ""
	I1210 07:55:16.008816 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.008825 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:16.008831 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:16.008926 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:16.035859 1078428 cri.go:89] found id: ""
	I1210 07:55:16.035931 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.035946 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:16.035955 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:16.036022 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:16.068655 1078428 cri.go:89] found id: ""
	I1210 07:55:16.068688 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.068697 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:16.068704 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:16.068776 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:16.106754 1078428 cri.go:89] found id: ""
	I1210 07:55:16.106780 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.106790 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:16.106796 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:16.106862 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:16.133097 1078428 cri.go:89] found id: ""
	I1210 07:55:16.133124 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.133133 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:16.133139 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:16.133207 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:16.157892 1078428 cri.go:89] found id: ""
	I1210 07:55:16.157938 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.157947 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:16.157957 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:16.157970 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:16.212808 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:16.212848 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:16.228781 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:16.228813 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:16.291789 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:16.283368    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.283909    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.285648    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.286193    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.287783    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:16.283368    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.283909    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.285648    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.286193    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.287783    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:16.291811 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:16.291823 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:16.319342 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:16.319380 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:18.855190 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:18.865732 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:18.865807 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:18.889830 1078428 cri.go:89] found id: ""
	I1210 07:55:18.889855 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.889864 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:18.889871 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:18.889936 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:18.914345 1078428 cri.go:89] found id: ""
	I1210 07:55:18.914370 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.914379 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:18.914385 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:18.914444 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:18.939221 1078428 cri.go:89] found id: ""
	I1210 07:55:18.939243 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.939253 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:18.939258 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:18.939316 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:18.967766 1078428 cri.go:89] found id: ""
	I1210 07:55:18.967788 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.967796 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:18.967803 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:18.967867 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:18.996962 1078428 cri.go:89] found id: ""
	I1210 07:55:18.996984 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.996992 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:18.996999 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:18.997055 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:19.023004 1078428 cri.go:89] found id: ""
	I1210 07:55:19.023031 1078428 logs.go:282] 0 containers: []
	W1210 07:55:19.023043 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:19.023052 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:19.023115 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:19.057510 1078428 cri.go:89] found id: ""
	I1210 07:55:19.057540 1078428 logs.go:282] 0 containers: []
	W1210 07:55:19.057549 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:19.057555 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:19.057611 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:19.092862 1078428 cri.go:89] found id: ""
	I1210 07:55:19.092891 1078428 logs.go:282] 0 containers: []
	W1210 07:55:19.092900 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:19.092910 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:19.092921 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:19.150597 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:19.150632 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:19.166174 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:19.166252 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:19.232235 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:19.224144    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.224636    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.226223    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.226815    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.228275    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:19.224144    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.224636    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.226223    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.226815    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.228275    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:19.232259 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:19.232272 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:19.256392 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:19.256424 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
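
The "describe nodes" step fails the same way on every iteration: kubectl dials the apiserver endpoint from /var/lib/minikube/kubeconfig (localhost:8443) and gets connection refused, which is consistent with the empty kube-apiserver probes earlier in each cycle — nothing is listening on the port because the apiserver container never started. A quick check that would confirm this (hypothetical commands, not part of the log):

    # assumed verification step, assuming ss(8) is available on the node
    sudo ss -tln | grep -w 8443 || echo "no listener on :8443"
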
	W1210 07:55:17.554658 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:20.054087 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:21.783358 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:21.793821 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:21.793896 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:21.818542 1078428 cri.go:89] found id: ""
	I1210 07:55:21.818564 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.818573 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:21.818580 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:21.818639 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:21.842392 1078428 cri.go:89] found id: ""
	I1210 07:55:21.842414 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.842423 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:21.842429 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:21.842509 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:21.869909 1078428 cri.go:89] found id: ""
	I1210 07:55:21.869931 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.869940 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:21.869947 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:21.870009 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:21.896175 1078428 cri.go:89] found id: ""
	I1210 07:55:21.896197 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.896206 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:21.896212 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:21.896272 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:21.924596 1078428 cri.go:89] found id: ""
	I1210 07:55:21.924672 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.924684 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:21.924691 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:21.924781 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:21.952789 1078428 cri.go:89] found id: ""
	I1210 07:55:21.952811 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.952820 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:21.952826 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:21.952885 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:21.978579 1078428 cri.go:89] found id: ""
	I1210 07:55:21.978603 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.978611 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:21.978617 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:21.978678 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:22.002801 1078428 cri.go:89] found id: ""
	I1210 07:55:22.002829 1078428 logs.go:282] 0 containers: []
	W1210 07:55:22.002838 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:22.002848 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:22.002866 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:22.021034 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:22.021067 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:22.101183 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:22.089820    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.090755    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.092439    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.092768    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.094252    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:22.089820    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.090755    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.092439    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.092768    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.094252    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:22.101208 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:22.101223 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:22.133557 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:22.133593 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:22.160692 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:22.160719 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:55:22.554004 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:25.054003 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:24.716616 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:24.727463 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:24.727545 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:24.752976 1078428 cri.go:89] found id: ""
	I1210 07:55:24.753005 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.753014 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:24.753021 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:24.753081 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:24.780812 1078428 cri.go:89] found id: ""
	I1210 07:55:24.780841 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.780850 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:24.780856 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:24.780913 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:24.806877 1078428 cri.go:89] found id: ""
	I1210 07:55:24.806900 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.806909 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:24.806915 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:24.806979 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:24.836752 1078428 cri.go:89] found id: ""
	I1210 07:55:24.836785 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.836795 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:24.836809 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:24.836876 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:24.863110 1078428 cri.go:89] found id: ""
	I1210 07:55:24.863134 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.863143 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:24.863153 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:24.863219 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:24.888190 1078428 cri.go:89] found id: ""
	I1210 07:55:24.888214 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.888223 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:24.888230 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:24.888289 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:24.912349 1078428 cri.go:89] found id: ""
	I1210 07:55:24.912383 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.912394 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:24.912400 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:24.912462 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:24.937752 1078428 cri.go:89] found id: ""
	I1210 07:55:24.937781 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.937790 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:24.937799 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:24.937811 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:24.992892 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:24.992928 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:25.010173 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:25.010241 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:25.099629 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:25.089564    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.090718    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.091639    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.093411    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.093968    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:25.089564    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.090718    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.091639    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.093411    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.093968    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:25.099713 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:25.099746 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:25.131383 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:25.131423 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:27.663351 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:27.674757 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:27.674843 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:27.704367 1078428 cri.go:89] found id: ""
	I1210 07:55:27.704400 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.704409 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:27.704420 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:27.704484 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:27.731740 1078428 cri.go:89] found id: ""
	I1210 07:55:27.731773 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.731783 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:27.731790 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:27.731852 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:27.761848 1078428 cri.go:89] found id: ""
	I1210 07:55:27.761871 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.761880 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:27.761886 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:27.761952 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:27.789498 1078428 cri.go:89] found id: ""
	I1210 07:55:27.789527 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.789537 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:27.789543 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:27.789603 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:27.815293 1078428 cri.go:89] found id: ""
	I1210 07:55:27.815320 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.815335 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:27.815342 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:27.815401 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:27.840211 1078428 cri.go:89] found id: ""
	I1210 07:55:27.840238 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.840249 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:27.840256 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:27.840320 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:27.866289 1078428 cri.go:89] found id: ""
	I1210 07:55:27.866313 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.866323 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:27.866329 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:27.866388 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:27.892533 1078428 cri.go:89] found id: ""
	I1210 07:55:27.892560 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.892569 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:27.892578 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:27.892590 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:27.952019 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:27.952063 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:27.969597 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:27.969631 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:28.035775 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:28.025972    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.026696    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.028884    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.029384    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.031424    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:28.025972    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.026696    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.028884    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.029384    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.031424    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:28.035802 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:28.035816 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:28.064304 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:28.064344 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:55:27.054076 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:29.054524 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
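
The interleaved node_ready warnings come from a second test process (pid 1077343, the no-preload-587009 profile) whose apiserver at 192.168.85.2:8443 is refusing connections as well; it re-polls the node's Ready condition every few seconds, as the timestamps show. A rough manual equivalent of that poll (the curl invocation is an assumption; only the URL is taken from the log, and -k skips certificate verification, which the real client does not):

    curl -ks https://192.168.85.2:8443/api/v1/nodes/no-preload-587009 \
      || echo "connection refused"
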
	I1210 07:55:30.599553 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:30.609953 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:30.610023 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:30.634355 1078428 cri.go:89] found id: ""
	I1210 07:55:30.634384 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.634393 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:30.634400 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:30.634460 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:30.658396 1078428 cri.go:89] found id: ""
	I1210 07:55:30.658435 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.658444 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:30.658450 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:30.658540 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:30.683976 1078428 cri.go:89] found id: ""
	I1210 07:55:30.684014 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.684023 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:30.684030 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:30.684099 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:30.708278 1078428 cri.go:89] found id: ""
	I1210 07:55:30.708302 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.708311 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:30.708317 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:30.708376 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:30.733222 1078428 cri.go:89] found id: ""
	I1210 07:55:30.733253 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.733262 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:30.733269 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:30.733368 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:30.758588 1078428 cri.go:89] found id: ""
	I1210 07:55:30.758614 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.758623 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:30.758630 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:30.758700 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:30.783735 1078428 cri.go:89] found id: ""
	I1210 07:55:30.783802 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.783826 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:30.783841 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:30.783910 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:30.807833 1078428 cri.go:89] found id: ""
	I1210 07:55:30.807859 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.807867 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:30.807876 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:30.807888 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:30.872941 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:30.864693    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.865280    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.867101    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.867488    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.869102    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:30.864693    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.865280    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.867101    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.867488    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.869102    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:30.872961 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:30.872975 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:30.899140 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:30.899181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:30.926302 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:30.926333 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:30.982513 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:30.982550 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:33.499017 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:33.509596 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:33.509669 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:33.540057 1078428 cri.go:89] found id: ""
	I1210 07:55:33.540082 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.540090 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:33.540097 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:33.540160 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:33.570955 1078428 cri.go:89] found id: ""
	I1210 07:55:33.570982 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.570991 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:33.570997 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:33.571056 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:33.605930 1078428 cri.go:89] found id: ""
	I1210 07:55:33.605958 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.605968 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:33.605974 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:33.606036 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:33.634909 1078428 cri.go:89] found id: ""
	I1210 07:55:33.634932 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.634941 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:33.634947 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:33.635008 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:33.659844 1078428 cri.go:89] found id: ""
	I1210 07:55:33.659912 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.659927 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:33.659935 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:33.659999 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:33.684878 1078428 cri.go:89] found id: ""
	I1210 07:55:33.684902 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.684911 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:33.684918 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:33.684983 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:33.709473 1078428 cri.go:89] found id: ""
	I1210 07:55:33.709496 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.709505 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:33.709517 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:33.709580 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:33.736059 1078428 cri.go:89] found id: ""
	I1210 07:55:33.736086 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.736095 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:33.736105 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:33.736117 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:33.795512 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:33.795546 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:33.811254 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:33.811282 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:33.878126 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:33.869884    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.870553    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.872067    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.872608    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.874097    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:33.869884    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.870553    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.872067    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.872608    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.874097    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:33.878148 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:33.878163 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:33.904005 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:33.904041 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
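Each retry cycle (roughly every three seconds) enumerates the same eight control-plane and addon containers through crictl before falling back to log gathering. A condensed sketch of the equivalent shell loop, with the component list taken verbatim from the cycle above (the loop itself is an illustration, not minikube's actual Go code in cri.go/logs.go):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
        # --quiet prints container IDs only; empty output is what the log records as: found id: ""
        ids=$(sudo crictl ps -a --quiet --name="$c")
        [ -z "$ids" ] && echo "No container was found matching \"$c\""
    done
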
	W1210 07:55:31.054696 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:33.054864 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
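The two warnings above are interleaved from the parallel no-preload test (process 1077343), which polls GET https://192.168.85.2:8443/api/v1/nodes/no-preload-587009 for the node's "Ready" condition and retries while the connection is refused. A hedged equivalent of that poll with plain kubectl (standard jsonpath; pointing it at this particular cluster is an assumption):

    # prints "True" once the node reports Ready; fails while 192.168.85.2:8443 is unreachable
    kubectl get node no-preload-587009 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
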
	I1210 07:55:36.431681 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:36.442446 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:36.442546 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:36.466520 1078428 cri.go:89] found id: ""
	I1210 07:55:36.466544 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.466553 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:36.466559 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:36.466616 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:36.497280 1078428 cri.go:89] found id: ""
	I1210 07:55:36.497307 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.497316 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:36.497322 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:36.497382 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:36.526966 1078428 cri.go:89] found id: ""
	I1210 07:55:36.526988 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.526998 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:36.527003 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:36.527067 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:36.566317 1078428 cri.go:89] found id: ""
	I1210 07:55:36.566342 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.566351 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:36.566357 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:36.566432 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:36.598673 1078428 cri.go:89] found id: ""
	I1210 07:55:36.598699 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.598716 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:36.598722 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:36.598795 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:36.638514 1078428 cri.go:89] found id: ""
	I1210 07:55:36.638537 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.638545 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:36.638551 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:36.638621 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:36.663534 1078428 cri.go:89] found id: ""
	I1210 07:55:36.663603 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.663623 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:36.663630 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:36.663715 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:36.692427 1078428 cri.go:89] found id: ""
	I1210 07:55:36.692451 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.692461 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:36.692471 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:36.692482 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:36.717965 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:36.718003 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:36.749638 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:36.749668 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:36.806519 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:36.806562 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:36.823288 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:36.823315 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:36.888077 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:36.879018    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.879649    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.881436    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.881956    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.883715    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:36.879018    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.879649    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.881436    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.881956    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.883715    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
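Each cycle starts with `sudo pgrep -xnf kube-apiserver.*minikube.*`: -f matches against the full command line, -x requires the whole line to match the pattern exactly, and -n selects the newest matching process. Two quick probes that separate "no apiserver process" from "process up but port 8443 closed" (the curl probe is an editor's assumption; the harness itself only runs pgrep and crictl):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
    curl -sk https://localhost:8443/healthz      || echo "8443 refusing connections"
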
	I1210 07:55:39.389725 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:39.400775 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:39.400867 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:39.426362 1078428 cri.go:89] found id: ""
	I1210 07:55:39.426389 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.426398 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:39.426407 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:39.426555 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:39.455943 1078428 cri.go:89] found id: ""
	I1210 07:55:39.455969 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.455978 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:39.455984 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:39.456043 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:39.484097 1078428 cri.go:89] found id: ""
	I1210 07:55:39.484127 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.484142 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:39.484150 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:39.484209 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	W1210 07:55:35.554545 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:37.554652 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:40.054927 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:39.510381 1078428 cri.go:89] found id: ""
	I1210 07:55:39.510408 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.510417 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:39.510423 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:39.510508 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:39.534754 1078428 cri.go:89] found id: ""
	I1210 07:55:39.534819 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.534838 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:39.534845 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:39.534903 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:39.577369 1078428 cri.go:89] found id: ""
	I1210 07:55:39.577400 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.577409 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:39.577416 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:39.577519 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:39.607302 1078428 cri.go:89] found id: ""
	I1210 07:55:39.607329 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.607348 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:39.607355 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:39.607429 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:39.637231 1078428 cri.go:89] found id: ""
	I1210 07:55:39.637270 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.637282 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:39.637292 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:39.637305 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:39.694701 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:39.694745 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:39.711729 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:39.711761 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:39.777959 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:39.769450    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.770116    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.771675    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.771995    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.773551    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:39.769450    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.770116    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.771675    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.771995    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.773551    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:39.777980 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:39.777995 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:39.802829 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:39.802869 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
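The "container status" step relies on a shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. The backtick substitution resolves crictl's absolute path when it is on PATH and otherwise leaves the bare name, and if that whole invocation fails the harness falls back to `docker ps -a`. The same idiom in the more readable $() form (a sketch; the behavior is identical):

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
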
	I1210 07:55:42.336278 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:42.348869 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:42.348958 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:42.376684 1078428 cri.go:89] found id: ""
	I1210 07:55:42.376751 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.376766 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:42.376774 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:42.376834 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:42.401855 1078428 cri.go:89] found id: ""
	I1210 07:55:42.401881 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.401890 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:42.401897 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:42.401956 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:42.429508 1078428 cri.go:89] found id: ""
	I1210 07:55:42.429532 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.429541 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:42.429547 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:42.429605 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:42.453954 1078428 cri.go:89] found id: ""
	I1210 07:55:42.453978 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.453988 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:42.453994 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:42.454052 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:42.480307 1078428 cri.go:89] found id: ""
	I1210 07:55:42.480372 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.480386 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:42.480393 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:42.480465 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:42.505157 1078428 cri.go:89] found id: ""
	I1210 07:55:42.505189 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.505198 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:42.505205 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:42.505272 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:42.530482 1078428 cri.go:89] found id: ""
	I1210 07:55:42.530505 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.530513 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:42.530520 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:42.530580 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:42.563929 1078428 cri.go:89] found id: ""
	I1210 07:55:42.563996 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.564019 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:42.564041 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:42.564081 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:42.627607 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:42.627645 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:42.644032 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:42.644059 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:42.709684 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:42.701113    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.701752    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.703455    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.704045    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.705675    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:42.701113    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.701752    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.703455    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.704045    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.705675    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:42.709704 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:42.709717 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:42.735150 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:42.735190 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:55:42.554153 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:44.554944 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:45.263314 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:45.276890 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:45.276965 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:45.320051 1078428 cri.go:89] found id: ""
	I1210 07:55:45.320079 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.320089 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:45.320096 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:45.320155 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:45.357108 1078428 cri.go:89] found id: ""
	I1210 07:55:45.357143 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.357153 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:45.357159 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:45.357235 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:45.386251 1078428 cri.go:89] found id: ""
	I1210 07:55:45.386281 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.386290 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:45.386296 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:45.386355 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:45.411934 1078428 cri.go:89] found id: ""
	I1210 07:55:45.411960 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.411969 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:45.411975 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:45.412034 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:45.438194 1078428 cri.go:89] found id: ""
	I1210 07:55:45.438221 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.438236 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:45.438242 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:45.438299 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:45.462840 1078428 cri.go:89] found id: ""
	I1210 07:55:45.462864 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.462874 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:45.462880 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:45.462938 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:45.487271 1078428 cri.go:89] found id: ""
	I1210 07:55:45.487296 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.487304 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:45.487311 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:45.487368 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:45.512829 1078428 cri.go:89] found id: ""
	I1210 07:55:45.512859 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.512868 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:45.512877 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:45.512888 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:45.592088 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:45.582808    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.583533    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.585213    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.585778    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.587458    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:45.582808    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.583533    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.585213    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.585778    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.587458    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:45.592106 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:45.592119 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:45.625233 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:45.625268 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:45.653443 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:45.653475 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:45.708240 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:45.708280 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
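The dmesg step filters the kernel ring buffer down to warnings and worse: -H formats output for humans, -P suppresses the pager that -H would otherwise start, -L=never disables color, and --level warn,err,crit,alert,emerg keeps only those priorities, with `tail -n 400` capping the result. The same command with the long options spelled out (per util-linux dmesg; the expansion is purely illustrative):

    sudo dmesg --human --nopager --color=never \
         --level warn,err,crit,alert,emerg | tail -n 400
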
	I1210 07:55:48.225757 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:48.236296 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:48.236369 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:48.261289 1078428 cri.go:89] found id: ""
	I1210 07:55:48.261312 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.261320 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:48.261337 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:48.261400 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:48.286722 1078428 cri.go:89] found id: ""
	I1210 07:55:48.286746 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.286755 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:48.286761 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:48.286819 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:48.322426 1078428 cri.go:89] found id: ""
	I1210 07:55:48.322453 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.322484 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:48.322507 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:48.322588 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:48.351023 1078428 cri.go:89] found id: ""
	I1210 07:55:48.351052 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.351062 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:48.351068 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:48.351126 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:48.378519 1078428 cri.go:89] found id: ""
	I1210 07:55:48.378542 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.378550 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:48.378556 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:48.378616 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:48.403355 1078428 cri.go:89] found id: ""
	I1210 07:55:48.403382 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.403392 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:48.403398 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:48.403478 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:48.427960 1078428 cri.go:89] found id: ""
	I1210 07:55:48.427986 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.427995 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:48.428001 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:48.428059 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:48.451603 1078428 cri.go:89] found id: ""
	I1210 07:55:48.451670 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.451696 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:48.451714 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:48.451727 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:48.506052 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:48.506088 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:48.523423 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:48.523453 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:48.594581 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:48.586197    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.587063    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.588701    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.589035    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.590570    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:48.586197    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.587063    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.588701    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.589035    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.590570    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:48.594606 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:48.594619 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:48.622945 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:48.622982 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:55:47.054022 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:49.054783 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:51.154448 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:51.165850 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:51.165926 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:51.191582 1078428 cri.go:89] found id: ""
	I1210 07:55:51.191607 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.191615 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:51.191622 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:51.191681 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:51.216289 1078428 cri.go:89] found id: ""
	I1210 07:55:51.216314 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.216324 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:51.216331 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:51.216390 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:51.245299 1078428 cri.go:89] found id: ""
	I1210 07:55:51.245324 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.245333 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:51.245339 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:51.245400 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:51.269348 1078428 cri.go:89] found id: ""
	I1210 07:55:51.269372 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.269380 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:51.269387 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:51.269443 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:51.296327 1078428 cri.go:89] found id: ""
	I1210 07:55:51.296350 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.296360 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:51.296367 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:51.296433 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:51.326976 1078428 cri.go:89] found id: ""
	I1210 07:55:51.326997 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.327005 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:51.327011 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:51.327069 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:51.360781 1078428 cri.go:89] found id: ""
	I1210 07:55:51.360857 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.360873 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:51.360881 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:51.360960 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:51.384754 1078428 cri.go:89] found id: ""
	I1210 07:55:51.384779 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.384788 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:51.384799 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:51.384810 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:51.443446 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:51.443483 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:51.461527 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:51.461559 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:51.529060 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:51.520063    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.520763    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.522338    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.522821    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.524380    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:51.520063    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.520763    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.522338    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.522821    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.524380    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:51.529096 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:51.529109 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:51.561037 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:51.561354 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:54.111711 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:54.122707 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:54.122781 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:54.152821 1078428 cri.go:89] found id: ""
	I1210 07:55:54.152853 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.152867 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:54.152878 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:54.152961 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:54.180559 1078428 cri.go:89] found id: ""
	I1210 07:55:54.180583 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.180591 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:54.180598 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:54.180662 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:54.208251 1078428 cri.go:89] found id: ""
	I1210 07:55:54.208276 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.208285 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:54.208292 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:54.208349 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:54.233630 1078428 cri.go:89] found id: ""
	I1210 07:55:54.233655 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.233664 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:54.233670 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:54.233727 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:54.258409 1078428 cri.go:89] found id: ""
	I1210 07:55:54.258435 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.258443 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:54.258450 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:54.258533 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:54.282200 1078428 cri.go:89] found id: ""
	I1210 07:55:54.282234 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.282242 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:54.282248 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:54.282306 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:54.326329 1078428 cri.go:89] found id: ""
	I1210 07:55:54.326352 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.326361 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:54.326367 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:54.326428 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:54.353371 1078428 cri.go:89] found id: ""
	I1210 07:55:54.353396 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.353405 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:54.353415 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:54.353429 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:54.412987 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:54.413025 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:54.429633 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:54.429718 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:51.553930 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:53.554866 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:54.497491 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:54.488603    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.489246    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.490208    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.491739    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.492335    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:54.497530 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:54.497544 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:54.523210 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:54.523247 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
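
Each polling cycle below queries crictl once per expected control-plane component; `--quiet` prints only container IDs, so empty output is exactly what produces the `found id: ""` and `No container was found matching ...` lines. A sketch of that loop, assuming crictl is on the PATH (this is the shape of the check, not minikube's actual cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the names polled in the log, in order.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

func main() {
	for _, name := range components {
		// --quiet prints only container IDs, one per line; empty output
		// means no container (running or exited) matches the name filter.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}
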
	I1210 07:55:57.066626 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:57.077561 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:57.077642 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:57.102249 1078428 cri.go:89] found id: ""
	I1210 07:55:57.102273 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.102282 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:57.102289 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:57.102352 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:57.126387 1078428 cri.go:89] found id: ""
	I1210 07:55:57.126413 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.126421 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:57.126427 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:57.126506 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:57.151315 1078428 cri.go:89] found id: ""
	I1210 07:55:57.151341 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.151351 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:57.151357 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:57.151417 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:57.180045 1078428 cri.go:89] found id: ""
	I1210 07:55:57.180074 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.180083 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:57.180090 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:57.180150 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:57.205199 1078428 cri.go:89] found id: ""
	I1210 07:55:57.205225 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.205233 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:57.205240 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:57.205299 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:57.233971 1078428 cri.go:89] found id: ""
	I1210 07:55:57.233999 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.234009 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:57.234015 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:57.234078 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:57.258568 1078428 cri.go:89] found id: ""
	I1210 07:55:57.258594 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.258604 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:57.258610 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:57.258668 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:57.282764 1078428 cri.go:89] found id: ""
	I1210 07:55:57.282790 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.282800 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:57.282810 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:57.282823 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:57.299427 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:57.299453 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:57.374740 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:57.366367   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.367109   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.368634   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.369192   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.370847   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:57.374810 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:57.374851 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:57.400786 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:57.400822 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:57.427735 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:57.427767 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:55:56.054043 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:58.054190 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:00.055015 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
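
Interleaved with the 1078428 run, process 1077343 (the no-preload-587009 profile) keeps re-fetching its node object and getting connection refused, since nothing is listening on 192.168.85.2:8443 yet. A stdlib-only sketch of that kind of wait loop follows; minikube itself goes through client-go, and InsecureSkipVerify here merely stands in for real cluster certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForNode polls the node URL until the apiserver answers or the
// deadline passes, retrying on "connection refused" as in the log.
func waitForNode(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil // the apiserver is answering
		}
		fmt.Println("will retry:", err) // e.g. connect: connection refused
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver did not answer within %s", timeout)
}

func main() {
	err := waitForNode("https://192.168.85.2:8443/api/v1/nodes/no-preload-587009", 2*time.Minute)
	fmt.Println(err)
}
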
	I1210 07:55:59.984110 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:59.994599 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:59.994677 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:00.044693 1078428 cri.go:89] found id: ""
	I1210 07:56:00.044863 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.044893 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:00.044928 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:00.045024 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:00.118046 1078428 cri.go:89] found id: ""
	I1210 07:56:00.118124 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.118150 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:00.118171 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:00.119167 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:00.182111 1078428 cri.go:89] found id: ""
	I1210 07:56:00.182136 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.182145 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:00.182152 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:00.182960 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:00.239971 1078428 cri.go:89] found id: ""
	I1210 07:56:00.239996 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.240006 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:00.240013 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:00.240085 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:00.287888 1078428 cri.go:89] found id: ""
	I1210 07:56:00.287927 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.287937 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:00.287945 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:00.288014 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:00.352509 1078428 cri.go:89] found id: ""
	I1210 07:56:00.352556 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.352566 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:00.352593 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:00.352712 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:00.421383 1078428 cri.go:89] found id: ""
	I1210 07:56:00.421421 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.421430 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:00.421437 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:00.421521 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:00.456737 1078428 cri.go:89] found id: ""
	I1210 07:56:00.456766 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.456776 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:00.456786 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:00.456803 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:00.539348 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:00.530832   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.531677   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.533452   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.533939   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.535406   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:00.539370 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:00.539385 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:00.569574 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:00.569616 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:00.613655 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:00.613680 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:00.671124 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:00.671163 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:03.187739 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:03.198133 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:03.198208 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:03.223791 1078428 cri.go:89] found id: ""
	I1210 07:56:03.223818 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.223828 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:03.223834 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:03.223894 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:03.248620 1078428 cri.go:89] found id: ""
	I1210 07:56:03.248644 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.248653 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:03.248659 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:03.248720 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:03.273951 1078428 cri.go:89] found id: ""
	I1210 07:56:03.273975 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.273985 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:03.273991 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:03.274053 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:03.300277 1078428 cri.go:89] found id: ""
	I1210 07:56:03.300300 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.300309 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:03.300315 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:03.300372 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:03.332941 1078428 cri.go:89] found id: ""
	I1210 07:56:03.332967 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.332977 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:03.332983 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:03.333038 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:03.367066 1078428 cri.go:89] found id: ""
	I1210 07:56:03.367091 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.367100 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:03.367106 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:03.367164 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:03.391075 1078428 cri.go:89] found id: ""
	I1210 07:56:03.391098 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.391106 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:03.391112 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:03.391170 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:03.415021 1078428 cri.go:89] found id: ""
	I1210 07:56:03.415049 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.415058 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:03.415068 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:03.415079 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:03.440424 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:03.440470 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:03.468290 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:03.468319 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:03.525567 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:03.525601 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:03.541470 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:03.541505 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:03.626098 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:03.618196   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.618603   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.620212   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.620606   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.622172   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 07:56:02.554809 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:05.054059 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:06.126647 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:06.137759 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:06.137831 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:06.163154 1078428 cri.go:89] found id: ""
	I1210 07:56:06.163181 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.163191 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:06.163198 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:06.163265 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:06.192495 1078428 cri.go:89] found id: ""
	I1210 07:56:06.192521 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.192530 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:06.192536 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:06.192615 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:06.220976 1078428 cri.go:89] found id: ""
	I1210 07:56:06.221009 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.221017 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:06.221025 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:06.221134 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:06.246400 1078428 cri.go:89] found id: ""
	I1210 07:56:06.246427 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.246436 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:06.246442 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:06.246523 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:06.272644 1078428 cri.go:89] found id: ""
	I1210 07:56:06.272667 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.272675 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:06.272681 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:06.272738 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:06.300567 1078428 cri.go:89] found id: ""
	I1210 07:56:06.300636 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.300648 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:06.300655 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:06.300726 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:06.332683 1078428 cri.go:89] found id: ""
	I1210 07:56:06.332750 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.332773 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:06.332795 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:06.332881 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:06.366018 1078428 cri.go:89] found id: ""
	I1210 07:56:06.366099 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.366124 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:06.366149 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:06.366177 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:06.422922 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:06.422958 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:06.439199 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:06.439231 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:06.512644 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:06.503564   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.504265   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.506193   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.506871   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.508506   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:06.512669 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:06.512682 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:06.537590 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:06.537625 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
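
Every gathering pass collects the same five sections with the commands visible above: kubelet and containerd via journalctl, a filtered dmesg, `kubectl describe nodes` through the versioned binary, and the container-status fallback. A sketch of that fan-out as a label-to-command table, reusing the exact shell commands from the log (the sections variable itself is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// sections maps a log-section label to the shell command that produces it,
// following the commands visible in the log.
var sections = []struct {
	name string
	cmd  string
}{
	{"kubelet", `sudo journalctl -u kubelet -n 400`},
	{"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
	{"describe nodes", `sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`},
	{"containerd", `sudo journalctl -u containerd -n 400`},
	{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
}

func main() {
	for _, s := range sections {
		fmt.Println("Gathering logs for", s.name, "...")
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			// A failed section is logged and gathering continues,
			// as with the failed "describe nodes" above.
			fmt.Printf("failed %s: %v\n", s.name, err)
		}
		fmt.Print(string(out))
	}
}
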
	I1210 07:56:09.085608 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:09.095930 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:09.096006 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:09.119422 1078428 cri.go:89] found id: ""
	I1210 07:56:09.119445 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.119454 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:09.119460 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:09.119518 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:09.145193 1078428 cri.go:89] found id: ""
	I1210 07:56:09.145220 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.145230 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:09.145236 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:09.145296 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:09.170538 1078428 cri.go:89] found id: ""
	I1210 07:56:09.170567 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.170576 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:09.170582 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:09.170640 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:09.199713 1078428 cri.go:89] found id: ""
	I1210 07:56:09.199741 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.199749 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:09.199756 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:09.199815 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:09.224005 1078428 cri.go:89] found id: ""
	I1210 07:56:09.224037 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.224046 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:09.224053 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:09.224112 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:09.254251 1078428 cri.go:89] found id: ""
	I1210 07:56:09.254273 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.254283 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:09.254290 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:09.254348 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:09.280458 1078428 cri.go:89] found id: ""
	I1210 07:56:09.280484 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.280493 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:09.280500 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:09.280565 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:09.320912 1078428 cri.go:89] found id: ""
	I1210 07:56:09.320943 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.320952 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:09.320961 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:09.320974 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:09.386817 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:09.386854 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:09.402878 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:09.402954 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:09.472013 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:09.462698   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.463971   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.464603   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.466051   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.467835   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:09.472092 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:09.472114 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1210 07:56:07.054571 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:09.054701 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:09.497983 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:09.498020 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:12.030207 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:12.040966 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:12.041087 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:12.069314 1078428 cri.go:89] found id: ""
	I1210 07:56:12.069346 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.069356 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:12.069362 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:12.069424 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:12.096321 1078428 cri.go:89] found id: ""
	I1210 07:56:12.096400 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.096423 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:12.096438 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:12.096519 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:12.122859 1078428 cri.go:89] found id: ""
	I1210 07:56:12.122887 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.122896 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:12.122903 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:12.122985 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:12.148481 1078428 cri.go:89] found id: ""
	I1210 07:56:12.148505 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.148514 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:12.148520 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:12.148633 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:12.172954 1078428 cri.go:89] found id: ""
	I1210 07:56:12.172978 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.172995 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:12.173003 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:12.173063 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:12.198414 1078428 cri.go:89] found id: ""
	I1210 07:56:12.198436 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.198446 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:12.198453 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:12.198530 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:12.227549 1078428 cri.go:89] found id: ""
	I1210 07:56:12.227576 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.227586 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:12.227592 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:12.227651 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:12.255277 1078428 cri.go:89] found id: ""
	I1210 07:56:12.255300 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.255309 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:12.255318 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:12.255330 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:12.343072 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:12.327709   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.328182   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.329582   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.330282   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.331929   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:12.343095 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:12.343109 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:12.370845 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:12.370884 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:12.401190 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:12.401217 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:12.456146 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:12.456181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:56:11.554344 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:13.554843 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
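
The whole block repeats with fresh timestamps and PIDs because the diagnosis loop is a poll against a deadline: try, log, sleep, try again until the apiserver comes up or the test times out. A generic sketch of that control flow, with the roughly 3-second interval inferred from the timestamps above:

package main

import (
	"context"
	"fmt"
	"time"
)

// pollUntil runs check every interval until it succeeds or ctx expires,
// the shape of the retry visible throughout this log.
func pollUntil(ctx context.Context, interval time.Duration, check func() error) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if err := check(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("gave up: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	attempts := 0
	err := pollUntil(ctx, 3*time.Second, func() error {
		attempts++
		return fmt.Errorf("apiserver still down (attempt %d)", attempts)
	})
	fmt.Println(err, "after", attempts, "attempts")
}
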
	I1210 07:56:14.972152 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:14.983046 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:14.983121 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:15.031099 1078428 cri.go:89] found id: ""
	I1210 07:56:15.031183 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.031217 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:15.031260 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:15.031373 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:15.061619 1078428 cri.go:89] found id: ""
	I1210 07:56:15.061646 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.061655 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:15.061662 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:15.061728 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:15.088678 1078428 cri.go:89] found id: ""
	I1210 07:56:15.088701 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.088709 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:15.088716 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:15.088781 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:15.118776 1078428 cri.go:89] found id: ""
	I1210 07:56:15.118854 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.118872 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:15.118881 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:15.118945 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:15.144691 1078428 cri.go:89] found id: ""
	I1210 07:56:15.144717 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.144727 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:15.144734 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:15.144799 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:15.169827 1078428 cri.go:89] found id: ""
	I1210 07:56:15.169854 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.169863 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:15.169870 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:15.169927 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:15.196425 1078428 cri.go:89] found id: ""
	I1210 07:56:15.196459 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.196468 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:15.196474 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:15.196533 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:15.221736 1078428 cri.go:89] found id: ""
	I1210 07:56:15.221763 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.221772 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
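
Each retry cycle above probes every expected control-plane component by name through crictl; an empty ID list is what produces the `No container was found matching "..."` warnings. A hedged sketch of that probe loop, not minikube's actual code, assuming crictl is on PATH and its CRI endpoint is configured:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "kubernetes-dashboard",
        }
        for _, name := range components {
            // Same invocation as in the log above.
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %v\n", name, ids)
        }
    }
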
	I1210 07:56:15.221782 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:15.221794 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:15.237860 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:15.237890 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:15.309823 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:15.299280   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.302014   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.302821   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.303726   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.304513   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:15.299280   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.302014   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.302821   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.303726   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.304513   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
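
The describe-nodes failure above comes from running the bundled kubectl against the in-VM kubeconfig while nothing is listening on port 8443. A minimal sketch that reproduces the invocation and surfaces stderr on a non-zero exit; the binary and kubeconfig paths are copied from the log and only resolve inside the minikube node:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        if err := cmd.Run(); err != nil {
            // Mirrors logs.go:130: report the failure but keep
            // gathering the remaining log sources.
            fmt.Printf("failed describe nodes: %v\nstderr: %s\n",
                err, stderr.String())
            return
        }
        fmt.Print(stdout.String())
    }
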
	I1210 07:56:15.309847 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:15.309860 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:15.342939 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:15.342990 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:15.376812 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:15.376839 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
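
The "Gathering logs for ..." steps each map to one shell pipeline executed over SSH. A sketch that runs the same pipelines locally, assuming a systemd host with root access; the command strings are copied verbatim from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        sources := map[string]string{
            "kubelet":          "sudo journalctl -u kubelet -n 400",
            "containerd":       "sudo journalctl -u containerd -n 400",
            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        }
        for name, cmdline := range sources {
            fmt.Println("Gathering logs for", name, "...")
            out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
            if err != nil {
                fmt.Printf("  %s failed: %v\n", name, err)
                continue
            }
            fmt.Printf("  %d bytes collected\n", len(out))
        }
    }
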
	I1210 07:56:17.934235 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:17.945317 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:17.945396 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:17.971659 1078428 cri.go:89] found id: ""
	I1210 07:56:17.971685 1078428 logs.go:282] 0 containers: []
	W1210 07:56:17.971694 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:17.971700 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:17.971753 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:17.996434 1078428 cri.go:89] found id: ""
	I1210 07:56:17.996476 1078428 logs.go:282] 0 containers: []
	W1210 07:56:17.996488 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:17.996495 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:17.996560 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:18.024303 1078428 cri.go:89] found id: ""
	I1210 07:56:18.024338 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.024347 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:18.024354 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:18.024416 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:18.049317 1078428 cri.go:89] found id: ""
	I1210 07:56:18.049344 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.049353 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:18.049360 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:18.049421 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:18.079586 1078428 cri.go:89] found id: ""
	I1210 07:56:18.079611 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.079620 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:18.079627 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:18.079686 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:18.108486 1078428 cri.go:89] found id: ""
	I1210 07:56:18.108511 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.108519 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:18.108526 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:18.108601 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:18.137645 1078428 cri.go:89] found id: ""
	I1210 07:56:18.137671 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.137680 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:18.137686 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:18.137767 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:18.161838 1078428 cri.go:89] found id: ""
	I1210 07:56:18.161863 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.161874 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:18.161883 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:18.161916 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:18.235505 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:18.227479   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.228109   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.229636   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.230246   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.231753   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:18.227479   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.228109   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.229636   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.230246   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.231753   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:18.235526 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:18.235539 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:18.260551 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:18.260589 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:18.288267 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:18.288296 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:18.349132 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:18.349215 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:56:16.054030 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:18.054084 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
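
Every cycle opens with a pgrep liveness check for the kube-apiserver process; a non-zero exit (no match) is what sends minikube into the crictl and log-gathering steps seen above. A small illustrative sketch, with the pgrep pattern copied from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("sudo", "pgrep", "-xnf",
            "kube-apiserver.*minikube.*").Run()
        if err != nil {
            // pgrep exits 1 when nothing matches the pattern.
            fmt.Println("kube-apiserver process not found; collecting diagnostics")
            return
        }
        fmt.Println("kube-apiserver is running")
    }
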
	I1210 07:56:20.868569 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:20.879574 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:20.879649 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:20.904201 1078428 cri.go:89] found id: ""
	I1210 07:56:20.904226 1078428 logs.go:282] 0 containers: []
	W1210 07:56:20.904235 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:20.904241 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:20.904299 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:20.929396 1078428 cri.go:89] found id: ""
	I1210 07:56:20.929423 1078428 logs.go:282] 0 containers: []
	W1210 07:56:20.929432 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:20.929439 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:20.929514 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:20.954953 1078428 cri.go:89] found id: ""
	I1210 07:56:20.954984 1078428 logs.go:282] 0 containers: []
	W1210 07:56:20.954993 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:20.954999 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:20.955058 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:20.978741 1078428 cri.go:89] found id: ""
	I1210 07:56:20.978767 1078428 logs.go:282] 0 containers: []
	W1210 07:56:20.978776 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:20.978782 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:20.978841 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:21.003286 1078428 cri.go:89] found id: ""
	I1210 07:56:21.003313 1078428 logs.go:282] 0 containers: []
	W1210 07:56:21.003323 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:21.003330 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:21.003402 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:21.034505 1078428 cri.go:89] found id: ""
	I1210 07:56:21.034527 1078428 logs.go:282] 0 containers: []
	W1210 07:56:21.034536 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:21.034543 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:21.034605 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:21.058861 1078428 cri.go:89] found id: ""
	I1210 07:56:21.058885 1078428 logs.go:282] 0 containers: []
	W1210 07:56:21.058894 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:21.058900 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:21.058958 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:21.082740 1078428 cri.go:89] found id: ""
	I1210 07:56:21.082764 1078428 logs.go:282] 0 containers: []
	W1210 07:56:21.082773 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:21.082782 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:21.082794 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:21.098247 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:21.098276 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:21.161962 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:21.153624   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.154239   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.155892   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.156389   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.158100   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:21.153624   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.154239   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.155892   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.156389   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.158100   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:21.161982 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:21.161995 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:21.187272 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:21.187314 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:21.214180 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:21.214213 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:23.769450 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:23.780372 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:23.780505 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:23.817607 1078428 cri.go:89] found id: ""
	I1210 07:56:23.817631 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.817641 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:23.817648 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:23.817709 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:23.848903 1078428 cri.go:89] found id: ""
	I1210 07:56:23.848927 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.848949 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:23.848960 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:23.849023 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:23.877281 1078428 cri.go:89] found id: ""
	I1210 07:56:23.877305 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.877314 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:23.877320 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:23.877387 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:23.903972 1078428 cri.go:89] found id: ""
	I1210 07:56:23.903997 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.904006 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:23.904013 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:23.904089 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:23.929481 1078428 cri.go:89] found id: ""
	I1210 07:56:23.929508 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.929517 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:23.929525 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:23.929586 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:23.954626 1078428 cri.go:89] found id: ""
	I1210 07:56:23.954665 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.954676 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:23.954683 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:23.954785 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:23.980069 1078428 cri.go:89] found id: ""
	I1210 07:56:23.980102 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.980111 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:23.980117 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:23.980176 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:24.005963 1078428 cri.go:89] found id: ""
	I1210 07:56:24.005987 1078428 logs.go:282] 0 containers: []
	W1210 07:56:24.005996 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:24.006006 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:24.006017 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:24.036028 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:24.036065 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:24.065541 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:24.065571 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:24.126584 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:24.126630 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:24.143358 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:24.143391 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:24.208974 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:24.200724   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.201305   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.202949   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.203979   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.204653   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:24.200724   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.201305   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.202949   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.203979   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.204653   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
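
The kubectl stderr above dials [::1]:8443 because the in-VM kubeconfig's server field points at https://localhost:8443, where no apiserver is bound yet, so the discovery request for the API group list is refused. A stdlib-only sketch that prints the server line(s) from that kubeconfig; the path is from the log, and the line-scan parse is an illustrative shortcut, not how kubectl actually loads it:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        raw, err := os.ReadFile("/var/lib/minikube/kubeconfig")
        if err != nil {
            fmt.Println("read kubeconfig:", err)
            return
        }
        for _, line := range strings.Split(string(raw), "\n") {
            // Standard kubeconfig YAML carries the apiserver URL in
            // a "server:" field under each cluster entry.
            if strings.Contains(strings.TrimSpace(line), "server:") {
                fmt.Println(strings.TrimSpace(line))
            }
        }
    }
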
	W1210 07:56:20.554242 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:22.554679 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:25.054999 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:26.710619 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:26.721267 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:26.721343 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:26.746073 1078428 cri.go:89] found id: ""
	I1210 07:56:26.746100 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.746109 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:26.746115 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:26.746178 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:26.772432 1078428 cri.go:89] found id: ""
	I1210 07:56:26.772456 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.772472 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:26.772479 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:26.772538 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:26.809928 1078428 cri.go:89] found id: ""
	I1210 07:56:26.809954 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.809964 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:26.809970 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:26.810026 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:26.837500 1078428 cri.go:89] found id: ""
	I1210 07:56:26.837522 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.837531 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:26.837538 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:26.837592 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:26.864667 1078428 cri.go:89] found id: ""
	I1210 07:56:26.864693 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.864702 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:26.864708 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:26.864768 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:26.892330 1078428 cri.go:89] found id: ""
	I1210 07:56:26.892359 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.892368 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:26.892374 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:26.892457 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:26.916781 1078428 cri.go:89] found id: ""
	I1210 07:56:26.916807 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.916815 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:26.916822 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:26.916902 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:26.945103 1078428 cri.go:89] found id: ""
	I1210 07:56:26.945128 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.945137 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:26.945147 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:26.945178 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:27.001893 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:27.001933 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:27.020119 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:27.020149 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:27.092626 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:27.084844   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.085466   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.086971   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.087472   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.088931   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:27.084844   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.085466   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.086971   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.087472   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.088931   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:27.092690 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:27.092712 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:27.118838 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:27.118873 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:56:27.554852 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:29.554968 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:29.646997 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:29.659058 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:29.659139 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:29.684417 1078428 cri.go:89] found id: ""
	I1210 07:56:29.684442 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.684452 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:29.684459 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:29.684532 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:29.713716 1078428 cri.go:89] found id: ""
	I1210 07:56:29.713747 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.713756 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:29.713762 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:29.713829 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:29.742671 1078428 cri.go:89] found id: ""
	I1210 07:56:29.742747 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.742761 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:29.742769 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:29.742834 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:29.767461 1078428 cri.go:89] found id: ""
	I1210 07:56:29.767488 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.767497 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:29.767503 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:29.767590 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:29.791629 1078428 cri.go:89] found id: ""
	I1210 07:56:29.791655 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.791664 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:29.791670 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:29.791728 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:29.822213 1078428 cri.go:89] found id: ""
	I1210 07:56:29.822240 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.822249 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:29.822255 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:29.822317 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:29.854606 1078428 cri.go:89] found id: ""
	I1210 07:56:29.854633 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.854643 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:29.854649 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:29.854709 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:29.880033 1078428 cri.go:89] found id: ""
	I1210 07:56:29.880059 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.880068 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:29.880077 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:29.880090 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:29.948475 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:29.940551   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.941416   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.942953   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.943415   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.944654   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:29.940551   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.941416   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.942953   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.943415   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.944654   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:29.948498 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:29.948512 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:29.974136 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:29.974171 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:30.013967 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:30.014008 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:30.097748 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:30.097788 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:32.617610 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:32.628661 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:32.628735 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:32.652564 1078428 cri.go:89] found id: ""
	I1210 07:56:32.652594 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.652603 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:32.652610 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:32.652668 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:32.680277 1078428 cri.go:89] found id: ""
	I1210 07:56:32.680302 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.680310 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:32.680317 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:32.680379 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:32.704183 1078428 cri.go:89] found id: ""
	I1210 07:56:32.704207 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.704216 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:32.704222 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:32.704285 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:32.729141 1078428 cri.go:89] found id: ""
	I1210 07:56:32.729165 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.729174 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:32.729180 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:32.729237 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:32.753460 1078428 cri.go:89] found id: ""
	I1210 07:56:32.753482 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.753490 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:32.753496 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:32.753562 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:32.781036 1078428 cri.go:89] found id: ""
	I1210 07:56:32.781061 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.781069 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:32.781076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:32.781131 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:32.816565 1078428 cri.go:89] found id: ""
	I1210 07:56:32.816586 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.816594 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:32.816599 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:32.816655 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:32.848807 1078428 cri.go:89] found id: ""
	I1210 07:56:32.848832 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.848841 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:32.848849 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:32.848861 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:32.908343 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:32.908379 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:32.924367 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:32.924396 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:32.994542 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:32.985949   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.986567   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.988414   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.988968   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.990531   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:32.985949   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.986567   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.988414   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.988968   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.990531   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:32.994565 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:32.994581 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:33.024802 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:33.024842 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:56:32.054022 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:34.554950 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:35.557491 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:35.568723 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:35.568795 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:35.601157 1078428 cri.go:89] found id: ""
	I1210 07:56:35.601184 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.601193 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:35.601200 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:35.601260 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:35.628459 1078428 cri.go:89] found id: ""
	I1210 07:56:35.628494 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.628503 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:35.628509 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:35.628570 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:35.656310 1078428 cri.go:89] found id: ""
	I1210 07:56:35.656332 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.656342 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:35.656348 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:35.656404 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:35.680954 1078428 cri.go:89] found id: ""
	I1210 07:56:35.680980 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.680992 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:35.680998 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:35.681055 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:35.708548 1078428 cri.go:89] found id: ""
	I1210 07:56:35.708575 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.708584 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:35.708590 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:35.708648 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:35.736013 1078428 cri.go:89] found id: ""
	I1210 07:56:35.736040 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.736049 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:35.736056 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:35.736124 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:35.760465 1078428 cri.go:89] found id: ""
	I1210 07:56:35.760495 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.760504 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:35.760511 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:35.760574 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:35.785429 1078428 cri.go:89] found id: ""
	I1210 07:56:35.785451 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.785460 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:35.785469 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:35.785481 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:35.871280 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:35.862207   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.863090   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.864745   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.865288   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.866984   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:35.871302 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:35.871315 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:35.897087 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:35.897124 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:35.925107 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:35.925134 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:35.981188 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:35.981270 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
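
The cycle above is the control-plane probe minikube repeats while waiting for the apiserver: for each expected component it runs sudo crictl ps -a --quiet --name=<component> over SSH and checks whether any container ID comes back. A minimal local sketch of that loop, assuming crictl is installed and sudo is available (the SSH transport used by ssh_runner.go is omitted):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // components mirrors the names probed in the log above.
    var components = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    }

    func main() {
        for _, name := range components {
            // Same invocation as in the log; --quiet prints only container IDs.
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("crictl failed for %s: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", name)
            } else {
                fmt.Printf("%s: found %d container(s): %v\n", name, len(ids), ids)
            }
        }
    }
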
	I1210 07:56:38.499048 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:38.509835 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:38.509908 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:38.534615 1078428 cri.go:89] found id: ""
	I1210 07:56:38.534637 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.534645 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:38.534652 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:38.534708 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:38.576309 1078428 cri.go:89] found id: ""
	I1210 07:56:38.576332 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.576341 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:38.576347 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:38.576407 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:38.611259 1078428 cri.go:89] found id: ""
	I1210 07:56:38.611281 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.611290 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:38.611297 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:38.611357 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:38.637583 1078428 cri.go:89] found id: ""
	I1210 07:56:38.637612 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.637621 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:38.637627 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:38.637686 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:38.662187 1078428 cri.go:89] found id: ""
	I1210 07:56:38.662267 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.662290 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:38.662310 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:38.662402 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:38.686838 1078428 cri.go:89] found id: ""
	I1210 07:56:38.686861 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.686869 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:38.686876 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:38.686933 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:38.710788 1078428 cri.go:89] found id: ""
	I1210 07:56:38.710815 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.710824 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:38.710831 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:38.710930 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:38.736531 1078428 cri.go:89] found id: ""
	I1210 07:56:38.736556 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.736565 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:38.736575 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:38.736589 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:38.752335 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:38.752364 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:38.826607 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:38.813602   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.818748   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.819180   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.820763   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.821332   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:38.826675 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:38.826688 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:38.854204 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:38.854240 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:38.883619 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:38.883647 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:56:37.054712 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:39.554110 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
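
The two node_ready.go warnings above come from a second profile running in parallel (no-preload-587009, pid 1077343), which is retrying a node "Ready" check against 192.168.85.2:8443 and getting connection refused. A sketch of an equivalent reachability probe; the ~2.5s retry spacing matches the timestamps above, while the attempt cap and dial timeout are illustrative assumptions:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const addr = "192.168.85.2:8443" // apiserver endpoint from the log
        for attempt := 1; attempt <= 5; attempt++ {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err != nil {
                // Failure mode seen above: connect: connection refused.
                fmt.Printf("attempt %d: %v (will retry)\n", attempt, err)
                time.Sleep(2500 * time.Millisecond)
                continue
            }
            conn.Close()
            fmt.Println("apiserver port is accepting connections")
            return
        }
        fmt.Println("apiserver never became reachable")
    }
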
	I1210 07:56:41.439316 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:41.450451 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:41.450532 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:41.476998 1078428 cri.go:89] found id: ""
	I1210 07:56:41.477022 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.477030 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:41.477036 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:41.477096 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:41.502043 1078428 cri.go:89] found id: ""
	I1210 07:56:41.502069 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.502078 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:41.502084 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:41.502145 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:41.526905 1078428 cri.go:89] found id: ""
	I1210 07:56:41.526931 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.526940 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:41.526947 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:41.527007 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:41.558750 1078428 cri.go:89] found id: ""
	I1210 07:56:41.558779 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.558788 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:41.558795 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:41.558851 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:41.596637 1078428 cri.go:89] found id: ""
	I1210 07:56:41.596664 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.596674 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:41.596680 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:41.596742 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:41.622316 1078428 cri.go:89] found id: ""
	I1210 07:56:41.622340 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.622348 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:41.622355 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:41.622418 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:41.648410 1078428 cri.go:89] found id: ""
	I1210 07:56:41.648482 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.648511 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:41.648518 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:41.648581 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:41.680776 1078428 cri.go:89] found id: ""
	I1210 07:56:41.680802 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.680811 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:41.680820 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:41.680832 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:41.708185 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:41.708211 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:41.767625 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:41.767662 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:41.784949 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:41.784980 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:41.871610 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:41.863026   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.863723   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.865304   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.865872   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.867591   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:41.871632 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:41.871645 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
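
Every "describe nodes" attempt fails the same way: the kubeconfig on the node points kubectl at https://localhost:8443, and with no kube-apiserver container running, each API discovery GET is refused. A sketch reproducing just that reachability check; TLS verification is skipped because this only tests whether the port answers, and the URL is the one from the memcache.go errors above:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // The discovery URL kubectl hits first (see the memcache.go errors above).
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://localhost:8443/api?timeout=32s")
        if err != nil {
            // While the apiserver is down this reports: connect: connection refused.
            fmt.Printf("couldn't get server API group list: %v\n", err)
            return
        }
        defer resp.Body.Close()
        fmt.Printf("apiserver reachable, discovery returned HTTP %d\n", resp.StatusCode)
    }
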
	I1210 07:56:44.398611 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:44.408733 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:44.408806 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:44.432507 1078428 cri.go:89] found id: ""
	I1210 07:56:44.432531 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.432540 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:44.432546 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:44.432607 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:44.457597 1078428 cri.go:89] found id: ""
	I1210 07:56:44.457622 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.457631 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:44.457637 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:44.457697 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:44.485123 1078428 cri.go:89] found id: ""
	I1210 07:56:44.485149 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.485158 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:44.485165 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:44.485228 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	W1210 07:56:42.054022 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:44.054891 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:44.510813 1078428 cri.go:89] found id: ""
	I1210 07:56:44.510848 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.510857 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:44.510870 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:44.510929 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:44.534504 1078428 cri.go:89] found id: ""
	I1210 07:56:44.534528 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.534537 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:44.534543 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:44.534600 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:44.574866 1078428 cri.go:89] found id: ""
	I1210 07:56:44.574940 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.574962 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:44.574983 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:44.575074 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:44.605450 1078428 cri.go:89] found id: ""
	I1210 07:56:44.605523 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.605546 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:44.605566 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:44.605652 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:44.633965 1078428 cri.go:89] found id: ""
	I1210 07:56:44.634039 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.634064 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:44.634087 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:44.634124 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:44.692591 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:44.692628 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:44.708687 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:44.708718 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:44.774532 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:44.765883   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.766513   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.768058   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.769010   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.770731   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:44.774581 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:44.774594 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:44.801145 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:44.801235 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
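
When none of the component containers are found, the fallback is to gather host-side diagnostics. The commands are the ones visible in the log (their order varies by cycle); the sketch below runs them locally through bash -c, assuming the same root privileges ssh_runner.go has on the node:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Fallback diagnostics, in the form they appear in the log.
    var gather = []struct{ name, cmd string }{
        {"kubelet", "sudo journalctl -u kubelet -n 400"},
        {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
        {"containerd", "sudo journalctl -u containerd -n 400"},
        {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    }

    func main() {
        for _, g := range gather {
            fmt.Printf("Gathering logs for %s ...\n", g.name)
            // Run through bash -c, as ssh_runner.go does on the node.
            out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("%s failed: %v\n", g.name, err)
            }
            fmt.Print(string(out))
        }
    }
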
	I1210 07:56:47.336116 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:47.346722 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:47.346793 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:47.370822 1078428 cri.go:89] found id: ""
	I1210 07:56:47.370860 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.370870 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:47.370876 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:47.370948 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:47.401111 1078428 cri.go:89] found id: ""
	I1210 07:56:47.401140 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.401149 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:47.401155 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:47.401212 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:47.430968 1078428 cri.go:89] found id: ""
	I1210 07:56:47.430991 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.430999 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:47.431004 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:47.431063 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:47.455626 1078428 cri.go:89] found id: ""
	I1210 07:56:47.455650 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.455659 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:47.455665 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:47.455722 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:47.479857 1078428 cri.go:89] found id: ""
	I1210 07:56:47.479882 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.479890 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:47.479896 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:47.479959 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:47.504271 1078428 cri.go:89] found id: ""
	I1210 07:56:47.504294 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.504305 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:47.504312 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:47.504373 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:47.532761 1078428 cri.go:89] found id: ""
	I1210 07:56:47.532837 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.532863 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:47.532886 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:47.532990 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:47.570086 1078428 cri.go:89] found id: ""
	I1210 07:56:47.570108 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.570116 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:47.570125 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:47.570137 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:47.586049 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:47.586078 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:47.655434 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:47.647357   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.647927   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.649588   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.650042   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.651608   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:47.655455 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:47.655470 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:47.680757 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:47.680794 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:47.708957 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:47.708986 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:56:46.554013 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:49.054042 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:50.265598 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:50.276268 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:50.276342 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:50.301484 1078428 cri.go:89] found id: ""
	I1210 07:56:50.301507 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.301515 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:50.301521 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:50.301582 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:50.327230 1078428 cri.go:89] found id: ""
	I1210 07:56:50.327255 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.327264 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:50.327270 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:50.327331 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:50.352201 1078428 cri.go:89] found id: ""
	I1210 07:56:50.352224 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.352233 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:50.352239 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:50.352299 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:50.377546 1078428 cri.go:89] found id: ""
	I1210 07:56:50.377571 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.377580 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:50.377586 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:50.377647 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:50.403517 1078428 cri.go:89] found id: ""
	I1210 07:56:50.403544 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.403552 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:50.403559 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:50.403635 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:50.432794 1078428 cri.go:89] found id: ""
	I1210 07:56:50.432820 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.432829 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:50.432835 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:50.432924 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:50.456905 1078428 cri.go:89] found id: ""
	I1210 07:56:50.456931 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.456941 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:50.456947 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:50.457013 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:50.488840 1078428 cri.go:89] found id: ""
	I1210 07:56:50.488908 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.488932 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:50.488949 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:50.488962 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:50.547966 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:50.548000 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:50.565711 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:50.565789 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:50.652776 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:50.644502   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.645087   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.646685   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.647075   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.648875   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:50.652800 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:50.652815 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:50.678909 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:50.678950 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:53.207825 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:53.218403 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:53.218500 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:53.244529 1078428 cri.go:89] found id: ""
	I1210 07:56:53.244556 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.244565 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:53.244572 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:53.244629 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:53.270382 1078428 cri.go:89] found id: ""
	I1210 07:56:53.270408 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.270418 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:53.270424 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:53.270517 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:53.295316 1078428 cri.go:89] found id: ""
	I1210 07:56:53.295342 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.295352 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:53.295358 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:53.295425 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:53.324326 1078428 cri.go:89] found id: ""
	I1210 07:56:53.324351 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.324360 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:53.324367 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:53.324444 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:53.349399 1078428 cri.go:89] found id: ""
	I1210 07:56:53.349425 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.349435 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:53.349441 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:53.349555 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:53.374280 1078428 cri.go:89] found id: ""
	I1210 07:56:53.374305 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.374314 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:53.374321 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:53.374431 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:53.398894 1078428 cri.go:89] found id: ""
	I1210 07:56:53.398920 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.398929 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:53.398935 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:53.398992 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:53.423872 1078428 cri.go:89] found id: ""
	I1210 07:56:53.423897 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.423907 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:53.423920 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:53.423936 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:53.440226 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:53.440258 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:53.503949 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:53.495490   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.495917   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.497631   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.498044   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.499663   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:53.503975 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:53.503989 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:53.530691 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:53.530737 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:53.577761 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:53.577835 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:56:51.054085 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:53.054150 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:56.142597 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:56.153164 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:56.153234 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:56.177358 1078428 cri.go:89] found id: ""
	I1210 07:56:56.177391 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.177400 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:56.177406 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:56.177475 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:56.202573 1078428 cri.go:89] found id: ""
	I1210 07:56:56.202641 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.202657 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:56.202664 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:56.202725 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:56.226758 1078428 cri.go:89] found id: ""
	I1210 07:56:56.226785 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.226795 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:56.226802 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:56.226891 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:56.250286 1078428 cri.go:89] found id: ""
	I1210 07:56:56.250310 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.250319 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:56.250327 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:56.250381 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:56.276297 1078428 cri.go:89] found id: ""
	I1210 07:56:56.276375 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.276391 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:56.276398 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:56.276458 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:56.301334 1078428 cri.go:89] found id: ""
	I1210 07:56:56.301366 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.301375 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:56.301382 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:56.301450 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:56.325521 1078428 cri.go:89] found id: ""
	I1210 07:56:56.325557 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.325566 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:56.325572 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:56.325640 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:56.351180 1078428 cri.go:89] found id: ""
	I1210 07:56:56.351219 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.351228 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:56.351237 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:56.351249 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:56.406556 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:56.406592 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:56.422756 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:56.422788 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:56.486945 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:56.478739   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.479484   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.481033   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.481354   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.483059   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:56.486967 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:56.486983 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:56.512575 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:56.512616 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
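	The describe-nodes failures above all reduce to one symptom: nothing is listening on port 8443 inside the node under test, so kubectl gets "connection refused" before it can even negotiate TLS. A minimal Go sketch that reproduces the diagnosis with a bare TCP dial (the address and the 2-second timeout are assumptions for illustration):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// kubectl above fails with "dial tcp [::1]:8443: connect: connection refused";
		// a plain TCP dial against the same port reproduces that diagnosis.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}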
	I1210 07:56:59.046618 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:59.059092 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:59.059161 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:59.089542 1078428 cri.go:89] found id: ""
	I1210 07:56:59.089571 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.089580 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:59.089586 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:59.089648 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:59.118669 1078428 cri.go:89] found id: ""
	I1210 07:56:59.118691 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.118700 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:59.118706 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:59.118770 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:59.143775 1078428 cri.go:89] found id: ""
	I1210 07:56:59.143802 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.143814 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:59.143821 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:59.143880 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:59.167972 1078428 cri.go:89] found id: ""
	I1210 07:56:59.167997 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.168006 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:59.168012 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:59.168088 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:59.195291 1078428 cri.go:89] found id: ""
	I1210 07:56:59.195316 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.195325 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:59.195331 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:59.195434 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:59.219900 1078428 cri.go:89] found id: ""
	I1210 07:56:59.219928 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.219937 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:59.219943 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:59.220002 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:59.252792 1078428 cri.go:89] found id: ""
	I1210 07:56:59.252818 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.252827 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:59.252834 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:59.252894 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:59.281785 1078428 cri.go:89] found id: ""
	I1210 07:56:59.281808 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.281823 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
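	The sweep above is mechanical: one crictl query per control-plane component, and an empty ID list is what produces each "No container was found matching" warning, meaning the pod was never created. A rough Go sketch of the same per-component sweep, assuming local access to crictl via os/exec rather than minikube's ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// One query per component, mirroring the cri.go sweep in the log.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %v\n", name, ids)
		}
	}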
	I1210 07:56:59.281832 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:59.281843 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:59.337457 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:59.337496 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:59.353622 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:59.353650 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:59.423704 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:59.414855   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.415874   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.416954   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.418065   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.418769   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:59.423725 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:59.423739 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:59.449814 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:59.449853 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:56:55.554362 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:57.554656 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:59.554765 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
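	In parallel, the no-preload-587009 start (pid 1077343) keeps polling its own apiserver endpoint and hitting the same refused port every couple of seconds. A sketch of that retry cadence, with a plain TCP dial standing in for the real Ready-condition check (the 2-second interval and dial timeout are assumptions); the real run is bounded by a 6-minute deadline, shown at the end of this section:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Each refused dial corresponds to one "will retry" warning in the log.
		addr := "192.168.85.2:8443"
		for attempt := 1; ; attempt++ {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				fmt.Printf("node endpoint reachable after %d attempts\n", attempt)
				return
			}
			fmt.Printf("attempt %d (will retry): %v\n", attempt, err)
			time.Sleep(2 * time.Second)
		}
	}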
	I1210 07:57:01.979246 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:01.990999 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:01.991072 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:02.022990 1078428 cri.go:89] found id: ""
	I1210 07:57:02.023028 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.023038 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:02.023046 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:02.023109 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:02.050830 1078428 cri.go:89] found id: ""
	I1210 07:57:02.050857 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.050867 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:02.050873 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:02.050930 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:02.080878 1078428 cri.go:89] found id: ""
	I1210 07:57:02.080901 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.080909 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:02.080915 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:02.080974 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:02.111744 1078428 cri.go:89] found id: ""
	I1210 07:57:02.111766 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.111774 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:02.111780 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:02.111838 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:02.139560 1078428 cri.go:89] found id: ""
	I1210 07:57:02.139587 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.139596 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:02.139602 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:02.139662 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:02.164249 1078428 cri.go:89] found id: ""
	I1210 07:57:02.164274 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.164282 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:02.164289 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:02.164347 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:02.191165 1078428 cri.go:89] found id: ""
	I1210 07:57:02.191187 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.191196 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:02.191202 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:02.191280 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:02.220305 1078428 cri.go:89] found id: ""
	I1210 07:57:02.220371 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.220395 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:02.220419 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:02.220447 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:02.275451 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:02.275490 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:02.291722 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:02.291797 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:02.357294 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:02.349371   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.349907   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.351488   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.351933   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.353434   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:57:02.357319 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:02.357333 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:02.382557 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:02.382591 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
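	Between sweeps, every cycle gathers the same fixed set of host-side logs: journalctl for kubelet and containerd, a priority-filtered dmesg, and a container-status listing with a crictl-or-docker fallback. A hedged Go sketch of that gathering loop, assuming the commands run locally (minikube actually drives them through ssh_runner); each step is a bash one-liner because the fallback depends on shell substitution:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The fixed gathering set each retry cycle runs, in log order.
		steps := []struct{ name, cmd string }{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"containerd", "sudo journalctl -u containerd -n 400"},
			{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		}
		for _, s := range steps {
			fmt.Println("Gathering logs for", s.name, "...")
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("%s failed: %v\n", s.name, err)
			}
			fmt.Print(string(out))
		}
	}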
	W1210 07:57:02.053955 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:57:04.553976 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:57:04.913285 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:04.924140 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:04.924214 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:04.949752 1078428 cri.go:89] found id: ""
	I1210 07:57:04.949787 1078428 logs.go:282] 0 containers: []
	W1210 07:57:04.949796 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:04.949803 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:04.949869 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:04.974850 1078428 cri.go:89] found id: ""
	I1210 07:57:04.974876 1078428 logs.go:282] 0 containers: []
	W1210 07:57:04.974886 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:04.974892 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:04.974949 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:04.999787 1078428 cri.go:89] found id: ""
	I1210 07:57:04.999853 1078428 logs.go:282] 0 containers: []
	W1210 07:57:04.999868 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:04.999875 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:04.999937 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:05.031544 1078428 cri.go:89] found id: ""
	I1210 07:57:05.031570 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.031580 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:05.031586 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:05.031644 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:05.068235 1078428 cri.go:89] found id: ""
	I1210 07:57:05.068262 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.068272 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:05.068278 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:05.068337 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:05.101435 1078428 cri.go:89] found id: ""
	I1210 07:57:05.101462 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.101472 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:05.101479 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:05.101545 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:05.129616 1078428 cri.go:89] found id: ""
	I1210 07:57:05.129640 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.129648 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:05.129654 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:05.129733 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:05.155520 1078428 cri.go:89] found id: ""
	I1210 07:57:05.155544 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.155553 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:05.155563 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:05.155575 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:05.212400 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:05.212436 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:05.228606 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:05.228643 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:05.292822 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:05.284723   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.285318   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.286836   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.287339   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.288802   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:57:05.292845 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:05.292858 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:05.318694 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:05.318732 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:07.846610 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:07.857861 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:07.857939 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:07.885093 1078428 cri.go:89] found id: ""
	I1210 07:57:07.885115 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.885124 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:07.885130 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:07.885192 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:07.909018 1078428 cri.go:89] found id: ""
	I1210 07:57:07.909043 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.909052 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:07.909058 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:07.909116 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:07.935262 1078428 cri.go:89] found id: ""
	I1210 07:57:07.935288 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.935298 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:07.935303 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:07.935366 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:07.959939 1078428 cri.go:89] found id: ""
	I1210 07:57:07.959965 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.959974 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:07.959981 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:07.960039 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:07.991314 1078428 cri.go:89] found id: ""
	I1210 07:57:07.991341 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.991350 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:07.991356 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:07.991415 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:08.020601 1078428 cri.go:89] found id: ""
	I1210 07:57:08.020628 1078428 logs.go:282] 0 containers: []
	W1210 07:57:08.020638 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:08.020645 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:08.020709 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:08.049221 1078428 cri.go:89] found id: ""
	I1210 07:57:08.049250 1078428 logs.go:282] 0 containers: []
	W1210 07:57:08.049259 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:08.049265 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:08.049323 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:08.078839 1078428 cri.go:89] found id: ""
	I1210 07:57:08.078862 1078428 logs.go:282] 0 containers: []
	W1210 07:57:08.078870 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:08.078883 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:08.078896 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:08.098811 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:08.098888 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:08.168958 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:08.160514   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.160982   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.162642   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.163069   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.164788   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:57:08.169024 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:08.169046 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:08.195261 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:08.195297 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:08.222093 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:08.222121 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:57:06.554902 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:57:09.054181 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:57:10.778721 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:10.791524 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:10.791597 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:10.819485 1078428 cri.go:89] found id: ""
	I1210 07:57:10.819507 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.819519 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:10.819525 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:10.819585 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:10.872623 1078428 cri.go:89] found id: ""
	I1210 07:57:10.872646 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.872654 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:10.872660 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:10.872724 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:10.898357 1078428 cri.go:89] found id: ""
	I1210 07:57:10.898378 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.898387 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:10.898393 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:10.898448 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:10.923976 1078428 cri.go:89] found id: ""
	I1210 07:57:10.924000 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.924009 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:10.924016 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:10.924095 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:10.952951 1078428 cri.go:89] found id: ""
	I1210 07:57:10.952986 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.952996 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:10.953002 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:10.953069 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:10.977761 1078428 cri.go:89] found id: ""
	I1210 07:57:10.977793 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.977802 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:10.977808 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:10.977878 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:11.009022 1078428 cri.go:89] found id: ""
	I1210 07:57:11.009052 1078428 logs.go:282] 0 containers: []
	W1210 07:57:11.009069 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:11.009076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:11.009147 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:11.034444 1078428 cri.go:89] found id: ""
	I1210 07:57:11.034493 1078428 logs.go:282] 0 containers: []
	W1210 07:57:11.034502 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:11.034512 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:11.034523 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:11.098059 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:11.098096 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:11.117339 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:11.117370 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:11.190897 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:11.182016   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.182889   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.184458   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.184955   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.186522   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:57:11.190919 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:11.190932 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:11.215685 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:11.215722 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:13.744333 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:13.754962 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:13.755031 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:13.783588 1078428 cri.go:89] found id: ""
	I1210 07:57:13.783611 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.783619 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:13.783625 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:13.783683 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:13.819100 1078428 cri.go:89] found id: ""
	I1210 07:57:13.819122 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.819130 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:13.819136 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:13.819193 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:13.860234 1078428 cri.go:89] found id: ""
	I1210 07:57:13.860257 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.860266 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:13.860272 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:13.860332 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:13.886331 1078428 cri.go:89] found id: ""
	I1210 07:57:13.886406 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.886418 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:13.886424 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:13.886540 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:13.911054 1078428 cri.go:89] found id: ""
	I1210 07:57:13.911080 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.911089 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:13.911097 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:13.911172 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:13.934983 1078428 cri.go:89] found id: ""
	I1210 07:57:13.935051 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.935066 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:13.935073 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:13.935131 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:13.960415 1078428 cri.go:89] found id: ""
	I1210 07:57:13.960440 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.960449 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:13.960455 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:13.960538 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:13.985917 1078428 cri.go:89] found id: ""
	I1210 07:57:13.985964 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.985974 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:13.985983 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:13.985995 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:14.046091 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:14.046336 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:14.068485 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:14.068513 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:14.145212 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:14.136671   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.137530   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.139238   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.139533   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.141026   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:57:14.145235 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:14.145248 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:14.170375 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:14.170409 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:57:11.553974 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:57:13.554028 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:57:15.554374 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:57:17.554945 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:57:19.054633 1077343 node_ready.go:38] duration metric: took 6m0.001135979s for node "no-preload-587009" to be "Ready" ...
	I1210 07:57:19.057729 1077343 out.go:203] 
	W1210 07:57:19.060573 1077343 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 07:57:19.060592 1077343 out.go:285] * 
	W1210 07:57:19.062943 1077343 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:57:19.065570 1077343 out.go:203] 
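	The exit above is the deadline path: the Ready condition never came true, so the 6m wait expires with "context deadline exceeded" and the run aborts with GUEST_START. A minimal sketch of that shape, with waitNodeReady as an illustrative stand-in (not minikube's actual API):

	package main

	import (
		"context"
		"fmt"
		"time"
	)

	// waitNodeReady stands in for the WaitNodeCondition step named in the exit
	// message; here the condition is never satisfied, so only the deadline fires.
	func waitNodeReady(ctx context.Context) error {
		<-ctx.Done()
		return fmt.Errorf("waiting for node to be ready: WaitNodeCondition: %w", ctx.Err())
	}

	func main() {
		timeout := 6 * time.Minute // the wait budget in the log; shrink when experimenting
		start := time.Now()
		ctx, cancel := context.WithTimeout(context.Background(), timeout)
		defer cancel()
		err := waitNodeReady(ctx)
		fmt.Printf("duration metric: took %s for node to be \"Ready\"\n", time.Since(start))
		fmt.Println("X Exiting due to GUEST_START: failed to start node:", err)
	}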
	I1210 07:57:16.699528 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:16.710231 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:16.710301 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:16.734299 1078428 cri.go:89] found id: ""
	I1210 07:57:16.734325 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.734333 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:16.734339 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:16.734402 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:16.759890 1078428 cri.go:89] found id: ""
	I1210 07:57:16.759916 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.759925 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:16.759934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:16.760017 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:16.788155 1078428 cri.go:89] found id: ""
	I1210 07:57:16.788181 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.788191 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:16.788197 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:16.788256 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:16.817801 1078428 cri.go:89] found id: ""
	I1210 07:57:16.817828 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.817837 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:16.817844 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:16.817904 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:16.845878 1078428 cri.go:89] found id: ""
	I1210 07:57:16.845905 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.845913 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:16.845919 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:16.845975 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:16.873613 1078428 cri.go:89] found id: ""
	I1210 07:57:16.873641 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.873651 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:16.873658 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:16.873719 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:16.898666 1078428 cri.go:89] found id: ""
	I1210 07:57:16.898689 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.898698 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:16.898704 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:16.898762 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:16.922533 1078428 cri.go:89] found id: ""
	I1210 07:57:16.922560 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.922569 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:16.922579 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:16.922591 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:16.948298 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:16.948341 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:16.976671 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:16.976699 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:17.033642 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:17.033681 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:17.052529 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:17.052568 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:17.131312 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:17.121533   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.122382   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.123692   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.124090   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.127051   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:57:19.632225 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:19.644243 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:19.644343 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:19.682502 1078428 cri.go:89] found id: ""
	I1210 07:57:19.682536 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.682546 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:19.682553 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:19.682615 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:19.709431 1078428 cri.go:89] found id: ""
	I1210 07:57:19.709455 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.709464 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:19.709470 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:19.709532 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:19.739384 1078428 cri.go:89] found id: ""
	I1210 07:57:19.739426 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.739436 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:19.739442 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:19.739502 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:19.767244 1078428 cri.go:89] found id: ""
	I1210 07:57:19.767266 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.767274 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:19.767281 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:19.767338 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:19.802183 1078428 cri.go:89] found id: ""
	I1210 07:57:19.802207 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.802216 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:19.802222 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:19.802283 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:19.864351 1078428 cri.go:89] found id: ""
	I1210 07:57:19.864373 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.864381 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:19.864388 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:19.864446 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:19.923313 1078428 cri.go:89] found id: ""
	I1210 07:57:19.923336 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.923344 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:19.923350 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:19.923412 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:19.956689 1078428 cri.go:89] found id: ""
	I1210 07:57:19.956768 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.956792 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:19.956836 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:19.956870 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:20.020110 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:20.020150 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:20.041105 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:20.041136 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:20.171782 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:20.151749   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.157532   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.158302   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.161399   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.161675   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[identical to the five "connection refused" errors and the final refusal message above]
	** /stderr **
	I1210 07:57:20.171803 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:20.171817 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:20.212388 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:20.212467 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:22.753904 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:22.771857 1078428 out.go:203] 
	W1210 07:57:22.774733 1078428 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1210 07:57:22.774767 1078428 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1210 07:57:22.774778 1078428 out.go:285] * Related issues:
	W1210 07:57:22.774790 1078428 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1210 07:57:22.774803 1078428 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1210 07:57:22.777684 1078428 out.go:203] 
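The K8S_APISERVER_MISSING exit above means the 6m0s wait for a kube-apiserver process ran out. A minimal sketch for replaying minikube's two probes by hand, assuming the docker driver used in this job (the node runs as the container newest-cni-237317 seen in the logs below):

	# the process probe minikube gave up on, run from the CI host
	docker exec newest-cni-237317 sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# list all CRI containers; an empty table matches the "0 containers" lines above
	docker exec newest-cni-237317 sudo crictl ps -a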
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780066864Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780147053Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780256331Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780332672Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780400546Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780472966Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780539559Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780607409Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.781584850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.781686825Z" level=info msg="Connect containerd service"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.782018760Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.782725912Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.792587048Z" level=info msg="Start subscribing containerd event"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.792681047Z" level=info msg="Start recovering state"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.792879967Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.792982622Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.827708066Z" level=info msg="Start event monitor"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.827890403Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.827954912Z" level=info msg="Start streaming server"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.828030688Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.828089839Z" level=info msg="runtime interface starting up..."
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.828151371Z" level=info msg="starting plugins..."
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.828234219Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 07:51:20 newest-cni-237317 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.830614962Z" level=info msg="containerd successfully booted in 0.079173s"
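The only error containerd reports is the CNI load failure ("no network config found in /etc/cni/net.d"); containerd's CRI plugin logs this whenever that directory is still empty at startup, and it clears once a CNI config is installed, so it is unlikely to be what kept the apiserver down (the kubelet log below shows the actual blocker). To inspect the directory on the node, under the same docker-driver assumption as above:

	# expected to be empty here, matching the containerd warning
	docker exec newest-cni-237317 ls -la /etc/cni/net.d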
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:27.184136   13402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:27.184553   13402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:27.186074   13402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:27.186411   13402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:27.187941   13402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:57:27 up  6:39,  0 user,  load average: 0.85, 0.69, 1.24
	Linux newest-cni-237317 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:57:24 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:57:24 newest-cni-237317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 10 07:57:24 newest-cni-237317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:24 newest-cni-237317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:24 newest-cni-237317 kubelet[13279]: E1210 07:57:24.872727   13279 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:57:24 newest-cni-237317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:57:24 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:57:25 newest-cni-237317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 10 07:57:25 newest-cni-237317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:25 newest-cni-237317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:25 newest-cni-237317 kubelet[13299]: E1210 07:57:25.599098   13299 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:57:25 newest-cni-237317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:57:25 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:57:26 newest-cni-237317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 10 07:57:26 newest-cni-237317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:26 newest-cni-237317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:26 newest-cni-237317 kubelet[13304]: E1210 07:57:26.354300   13304 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:57:26 newest-cni-237317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:57:26 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:57:27 newest-cni-237317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 487.
	Dec 10 07:57:27 newest-cni-237317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:27 newest-cni-237317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:27 newest-cni-237317 kubelet[13383]: E1210 07:57:27.119604   13383 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:57:27 newest-cni-237317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:57:27 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
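The kubelet block is the real failure: kubelet v1.35.0-beta.0 exits during config validation because the host is still on cgroup v1, so systemd restart-loops it (counters 484 through 487 in this window alone) and no static pods, the apiserver included, are ever started. A one-line sketch for checking which cgroup version a host runs:

	# prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on cgroup v1
	stat -fc %T /sys/fs/cgroup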
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-237317 -n newest-cni-237317
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-237317 -n newest-cni-237317: exit status 2 (333.69344ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-237317" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (374.99s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: [the identical warning repeated 41 more times]
E1210 07:58:06.072193  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/old-k8s-version-166796/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: [the identical warning repeated a further 71 times]
E1210 07:59:16.545745  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/default-k8s-diff-port-444518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[warning above repeated 40 more times]
E1210 07:59:57.496279  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[warning above repeated 16 more times]
E1210 08:00:14.423745  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[warning above repeated 9 more times]
E1210 08:00:24.250369  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[warning above repeated 14 more times]
E1210 08:00:39.614127  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/default-k8s-diff-port-444518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[warning above repeated 62 more times]
E1210 08:01:43.005907  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/old-k8s-version-166796/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[warning above repeated 4 more times]
E1210 08:01:47.324412  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:02:35.782905  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
I1210 08:03:53.466011  786751 config.go:182] Loaded profile config "custom-flannel-945825": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:04:03.941028  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/auto-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:04:03.947410  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/auto-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:04:03.958933  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/auto-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:04:03.980447  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/auto-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:04:04.021993  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/auto-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:04:04.103457  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/auto-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
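The cert_rotation errors come from client-go's TLS transport cache (the "tls-transport-cache.UnhandledError" logger above) trying to reload client certificates for profiles such as old-k8s-version-166796, functional-534748, and auto-945825 whose files have already been removed. A minimal sketch of the failing load follows, assuming the reloader simply reads the key pair from disk; the client.crt path is copied from the log, the client.key path and the load call are assumptions, not the actual client-go reloader.

// cert_reload_sketch.go — a minimal sketch of the failing certificate load;
// the .crt path is copied from the log, the rest is an assumption.
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	const crt = "/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/auto-945825/client.crt"
	const key = "/home/jenkins/minikube-integration/22089-784887/.minikube/profiles/auto-945825/client.key" // hypothetical companion key path
	// Once the profile directory is deleted, reading the pair fails with the
	// same "open ...: no such file or directory" error seen in the log.
	if _, err := tls.LoadX509KeyPair(crt, key); err != nil {
		fmt.Println("Loading client cert failed:", err)
	}
}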
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:04:04.264809  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/auto-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:04:04.586579  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/auto-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:04:05.228222  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/auto-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:04:06.509633  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/auto-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:04:09.071446  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/auto-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:04:14.194082  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/auto-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:04:16.546432  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/default-k8s-diff-port-444518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:04:44.918374  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/auto-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:05:14.423930  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:05:24.251006  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:05:25.880694  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/auto-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:05:40.233787  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:05:40.240217  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:05:40.251669  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:05:40.273159  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:05:40.314546  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:05:40.395933  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:05:40.557490  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:05:40.879128  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:05:41.521075  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:05:42.802837  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:05:45.364845  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:05:50.486880  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:06:00.728341  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:06:21.210176  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
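The repeated WARNINGs above come from the harness listing pods by label selector against an apiserver that is down, retrying until the 9m deadline lapses. A minimal sketch, assuming client-go, of that kind of poll loop (names such as waitForPod are hypothetical, not minikube's actual helper):

// Hypothetical sketch, not minikube's helpers_test.go: poll pods by label
// selector until one is Running or the context deadline expires.
package helpers

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForPod(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 3*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				// A refused connection is only logged as a WARNING;
				// the loop keeps retrying rather than failing fast.
				fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}

Once the 9m context expires, client-go's rate limiter surfaces the expiry as the "client rate limiter Wait returned an error: context deadline exceeded" seen in the final WARNING above.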
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-587009 -n no-preload-587009
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-587009 -n no-preload-587009: exit status 2 (468.615847ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-587009" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
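For context, the --format flag in the status invocation above takes a Go text/template that is executed against minikube's status struct. A minimal stand-alone sketch, assuming a stand-in Status type (only the APIServer field name is taken from the command above):

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for minikube's internal status type; the real type
// has more fields, but {{.APIServer}} only needs this one.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	// With the apiserver stopped, this prints "Stopped", matching the
	// -- stdout -- block above.
	_ = tmpl.Execute(os.Stdout, Status{APIServer: "Stopped"})
}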
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-587009
helpers_test.go:244: (dbg) docker inspect no-preload-587009:

-- stdout --
	[
	    {
	        "Id": "59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf",
	        "Created": "2025-12-10T07:40:57.013300176Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1077472,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:51:10.781643992Z",
	            "FinishedAt": "2025-12-10T07:51:09.433560094Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/hostname",
	        "HostsPath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/hosts",
	        "LogPath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf-json.log",
	        "Name": "/no-preload-587009",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-587009:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-587009",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf",
	                "LowerDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-587009",
	                "Source": "/var/lib/docker/volumes/no-preload-587009/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-587009",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-587009",
	                "name.minikube.sigs.k8s.io": "no-preload-587009",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3027da22b232bea75e393d2b661101d643e6e04216f3ba2ece99c7a84ae4f2ee",
	            "SandboxKey": "/var/run/docker/netns/3027da22b232",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33840"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33841"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33844"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33842"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33843"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-587009": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:01:16:c7:75:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "93aee88a5c37e6ba01b74d7794f328193d01cfce9cb66379ab00b3f3e2e73a48",
	                    "EndpointID": "4717ce896d8375f79b53590f55b234cfc29918d126a12ae9fa574429e9722162",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-587009",
	                        "59d9b9413fb3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
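
The inspect dump above matters mostly for its port table: each guest port (22 for SSH, 8443 for the apiserver, plus 2376, 5000, 32443) is published on an ephemeral 127.0.0.1 host port. Rather than parsing the whole JSON, a single field can be pulled with the same Go-template style that minikube's cli_runner uses later in this log; a minimal sketch (the profile name is taken from the dump above, the helper name is mine):

    // portfor.go - print the 127.0.0.1 host port Docker mapped for a given
    // container port, using a Go template like minikube's cli_runner does.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func hostPort(container, port string) (string, error) {
        // expands to e.g. {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}
        tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        p, err := hostPort("no-preload-587009", "8443/tcp")
        if err != nil {
            panic(err)
        }
        fmt.Println(p) // "33843" per the Ports table above
    }
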
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-587009 -n no-preload-587009
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-587009 -n no-preload-587009: exit status 2 (413.510429ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-587009 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-587009 logs -n 25: (1.076830645s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-945825 sudo iptables -t nat -L -n -v                                 │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │ 10 Dec 25 08:06 UTC │
	│ ssh     │ -p kindnet-945825 sudo systemctl status kubelet --all --full --no-pager         │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │ 10 Dec 25 08:06 UTC │
	│ ssh     │ -p kindnet-945825 sudo systemctl cat kubelet --no-pager                         │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │ 10 Dec 25 08:06 UTC │
	│ ssh     │ -p kindnet-945825 sudo journalctl -xeu kubelet --all --full --no-pager          │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │ 10 Dec 25 08:06 UTC │
	│ ssh     │ -p kindnet-945825 sudo cat /etc/kubernetes/kubelet.conf                         │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │ 10 Dec 25 08:06 UTC │
	│ ssh     │ -p kindnet-945825 sudo cat /var/lib/kubelet/config.yaml                         │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │ 10 Dec 25 08:06 UTC │
	│ ssh     │ -p kindnet-945825 sudo systemctl status docker --all --full --no-pager          │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │                     │
	│ ssh     │ -p kindnet-945825 sudo systemctl cat docker --no-pager                          │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │ 10 Dec 25 08:06 UTC │
	│ ssh     │ -p kindnet-945825 sudo cat /etc/docker/daemon.json                              │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │                     │
	│ ssh     │ -p kindnet-945825 sudo docker system info                                       │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │                     │
	│ ssh     │ -p kindnet-945825 sudo systemctl status cri-docker --all --full --no-pager      │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │                     │
	│ ssh     │ -p kindnet-945825 sudo systemctl cat cri-docker --no-pager                      │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │ 10 Dec 25 08:06 UTC │
	│ ssh     │ -p kindnet-945825 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │                     │
	│ ssh     │ -p kindnet-945825 sudo cat /usr/lib/systemd/system/cri-docker.service           │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │ 10 Dec 25 08:06 UTC │
	│ ssh     │ -p kindnet-945825 sudo cri-dockerd --version                                    │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │ 10 Dec 25 08:06 UTC │
	│ ssh     │ -p kindnet-945825 sudo systemctl status containerd --all --full --no-pager      │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │ 10 Dec 25 08:06 UTC │
	│ ssh     │ -p kindnet-945825 sudo systemctl cat containerd --no-pager                      │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │ 10 Dec 25 08:06 UTC │
	│ ssh     │ -p kindnet-945825 sudo cat /lib/systemd/system/containerd.service               │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │ 10 Dec 25 08:06 UTC │
	│ ssh     │ -p kindnet-945825 sudo cat /etc/containerd/config.toml                          │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │ 10 Dec 25 08:06 UTC │
	│ ssh     │ -p kindnet-945825 sudo containerd config dump                                   │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │ 10 Dec 25 08:06 UTC │
	│ ssh     │ -p kindnet-945825 sudo systemctl status crio --all --full --no-pager            │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │                     │
	│ ssh     │ -p kindnet-945825 sudo systemctl cat crio --no-pager                            │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │ 10 Dec 25 08:06 UTC │
	│ ssh     │ -p kindnet-945825 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │ 10 Dec 25 08:06 UTC │
	│ ssh     │ -p kindnet-945825 sudo crio config                                              │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │ 10 Dec 25 08:06 UTC │
	│ delete  │ -p kindnet-945825                                                               │ kindnet-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:06 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 08:04:24
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
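
The header above documents klog's line layout: a severity letter ([IWEF]), month and day, wall-clock time with microseconds, the thread id, then source file:line and the message. If the lines below need machine processing, a small parser suffices; a sketch (the regexp and field names are mine, not minikube code):

    // klogline.go - split a klog-style header line per the
    // "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" format above.
    package main

    import (
        "fmt"
        "regexp"
    )

    var klogRe = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

    func main() {
        line := "I1210 08:04:24.469228 1126792 out.go:360] Setting OutFile to fd 1 ..."
        m := klogRe.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a klog line")
            return
        }
        fmt.Printf("sev=%s date=%s time=%s tid=%s at=%s:%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6], m[7])
    }
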
	I1210 08:04:24.469228 1126792 out.go:360] Setting OutFile to fd 1 ...
	I1210 08:04:24.469467 1126792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:04:24.469505 1126792 out.go:374] Setting ErrFile to fd 2...
	I1210 08:04:24.469525 1126792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:04:24.469870 1126792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 08:04:24.470394 1126792 out.go:368] Setting JSON to false
	I1210 08:04:24.471466 1126792 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":24389,"bootTime":1765329476,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 08:04:24.471567 1126792 start.go:143] virtualization:  
	I1210 08:04:24.476027 1126792 out.go:179] * [kindnet-945825] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 08:04:24.480842 1126792 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 08:04:24.480912 1126792 notify.go:221] Checking for updates...
	I1210 08:04:24.487862 1126792 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 08:04:24.491346 1126792 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 08:04:24.494629 1126792 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 08:04:24.497927 1126792 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 08:04:24.501141 1126792 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 08:04:24.504785 1126792 config.go:182] Loaded profile config "no-preload-587009": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 08:04:24.504910 1126792 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 08:04:24.538409 1126792 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 08:04:24.538593 1126792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 08:04:24.626886 1126792 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 08:04:24.61713319 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 08:04:24.626994 1126792 docker.go:319] overlay module found
	I1210 08:04:24.630426 1126792 out.go:179] * Using the docker driver based on user configuration
	I1210 08:04:24.633388 1126792 start.go:309] selected driver: docker
	I1210 08:04:24.633405 1126792 start.go:927] validating driver "docker" against <nil>
	I1210 08:04:24.633420 1126792 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 08:04:24.634176 1126792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 08:04:24.693966 1126792 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 08:04:24.684568645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 08:04:24.694121 1126792 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 08:04:24.694361 1126792 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 08:04:24.697531 1126792 out.go:179] * Using Docker driver with root privileges
	I1210 08:04:24.700622 1126792 cni.go:84] Creating CNI manager for "kindnet"
	I1210 08:04:24.700654 1126792 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 08:04:24.700751 1126792 start.go:353] cluster config:
	{Name:kindnet-945825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-945825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 08:04:24.705750 1126792 out.go:179] * Starting "kindnet-945825" primary control-plane node in "kindnet-945825" cluster
	I1210 08:04:24.708666 1126792 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 08:04:24.711583 1126792 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 08:04:24.714452 1126792 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 08:04:24.714496 1126792 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1210 08:04:24.714534 1126792 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1210 08:04:24.714543 1126792 cache.go:65] Caching tarball of preloaded images
	I1210 08:04:24.714622 1126792 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 08:04:24.714631 1126792 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1210 08:04:24.714740 1126792 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/config.json ...
	I1210 08:04:24.714758 1126792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/config.json: {Name:mk02efdb49ab98258cdb4a1d5a0a33cd7307237a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:04:24.737504 1126792 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 08:04:24.737528 1126792 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 08:04:24.737549 1126792 cache.go:243] Successfully downloaded all kic artifacts
	I1210 08:04:24.737586 1126792 start.go:360] acquireMachinesLock for kindnet-945825: {Name:mk0f1d7558b8424a8af16706e89681eb2d0dec1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 08:04:24.737703 1126792 start.go:364] duration metric: took 96.609µs to acquireMachinesLock for "kindnet-945825"
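
The `acquireMachinesLock` lines above show machine creation being serialized across concurrent test profiles, retrying every 500ms for up to 10 minutes (the Delay/Timeout values in the lock spec). A minimal sketch of the same acquire-with-deadline pattern over an advisory flock; this illustrates the pattern only and is not minikube's lock implementation:

    // machlock.go - acquire an advisory file lock, retrying every delay
    // until a deadline, in the spirit of the Delay:500ms Timeout:10m0s
    // spec logged above. Illustrative only.
    package main

    import (
        "fmt"
        "os"
        "syscall"
        "time"
    )

    func acquire(path string, delay, timeout time.Duration) (*os.File, error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o644)
            if err != nil {
                return nil, err
            }
            // LOCK_NB makes flock fail immediately instead of blocking.
            if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
                return f, nil // caller releases the lock by closing f
            }
            f.Close()
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        f, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            panic(err)
        }
        defer f.Close()
        fmt.Println("lock held")
    }
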
	I1210 08:04:24.737735 1126792 start.go:93] Provisioning new machine with config: &{Name:kindnet-945825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-945825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 08:04:24.737809 1126792 start.go:125] createHost starting for "" (driver="docker")
	I1210 08:04:24.741337 1126792 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 08:04:24.741588 1126792 start.go:159] libmachine.API.Create for "kindnet-945825" (driver="docker")
	I1210 08:04:24.741622 1126792 client.go:173] LocalClient.Create starting
	I1210 08:04:24.741691 1126792 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem
	I1210 08:04:24.741731 1126792 main.go:143] libmachine: Decoding PEM data...
	I1210 08:04:24.741753 1126792 main.go:143] libmachine: Parsing certificate...
	I1210 08:04:24.741822 1126792 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem
	I1210 08:04:24.741844 1126792 main.go:143] libmachine: Decoding PEM data...
	I1210 08:04:24.741856 1126792 main.go:143] libmachine: Parsing certificate...
	I1210 08:04:24.742236 1126792 cli_runner.go:164] Run: docker network inspect kindnet-945825 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 08:04:24.758575 1126792 cli_runner.go:211] docker network inspect kindnet-945825 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 08:04:24.758662 1126792 network_create.go:284] running [docker network inspect kindnet-945825] to gather additional debugging logs...
	I1210 08:04:24.758683 1126792 cli_runner.go:164] Run: docker network inspect kindnet-945825
	W1210 08:04:24.774923 1126792 cli_runner.go:211] docker network inspect kindnet-945825 returned with exit code 1
	I1210 08:04:24.774959 1126792 network_create.go:287] error running [docker network inspect kindnet-945825]: docker network inspect kindnet-945825: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-945825 not found
	I1210 08:04:24.774973 1126792 network_create.go:289] output of [docker network inspect kindnet-945825]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-945825 not found
	
	** /stderr **
	I1210 08:04:24.775082 1126792 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 08:04:24.791469 1126792 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7092cc4ae12c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:9b:65:77:38:2f} reservation:<nil>}
	I1210 08:04:24.791828 1126792 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-948cd8ab8a49 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6e:79:0a:43:2f:62} reservation:<nil>}
	I1210 08:04:24.792177 1126792 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-21ed51b7c74f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:35:c0:64:58:42} reservation:<nil>}
	I1210 08:04:24.792635 1126792 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a345e0}
	I1210 08:04:24.792659 1126792 network_create.go:124] attempt to create docker network kindnet-945825 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 08:04:24.792715 1126792 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-945825 kindnet-945825
	I1210 08:04:24.847470 1126792 network_create.go:108] docker network kindnet-945825 192.168.76.0/24 created
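
The three `skipping subnet ... that is taken` lines above show the free-subnet scan: candidate /24 networks are tried in order (192.168.49.0, 192.168.58.0, 192.168.67.0, then 192.168.76.0, the third octet stepping by 9) and the first one with no matching host interface wins. A rough sketch of that scan, with the step and starting octet read off the log; the real code discovers in-use subnets from host interfaces rather than taking a list:

    // subnetscan.go - pick the first 192.168.x.0/24 not already in use,
    // stepping the third octet by 9 as the log above does (49, 58, 67, 76).
    package main

    import "fmt"

    func freeSubnet(taken map[string]bool) (string, error) {
        for octet := 49; octet <= 247; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[cidr] {
                return cidr, nil
            }
        }
        return "", fmt.Errorf("no free 192.168.0.0/16 /24 subnet")
    }

    func main() {
        taken := map[string]bool{
            "192.168.49.0/24": true, // bridges left by earlier profiles
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
        }
        cidr, err := freeSubnet(taken)
        if err != nil {
            panic(err)
        }
        fmt.Println(cidr) // 192.168.76.0/24, matching the network created above
    }
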
	I1210 08:04:24.847505 1126792 kic.go:121] calculated static IP "192.168.76.2" for the "kindnet-945825" container
	I1210 08:04:24.847593 1126792 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 08:04:24.865243 1126792 cli_runner.go:164] Run: docker volume create kindnet-945825 --label name.minikube.sigs.k8s.io=kindnet-945825 --label created_by.minikube.sigs.k8s.io=true
	I1210 08:04:24.883678 1126792 oci.go:103] Successfully created a docker volume kindnet-945825
	I1210 08:04:24.883788 1126792 cli_runner.go:164] Run: docker run --rm --name kindnet-945825-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-945825 --entrypoint /usr/bin/test -v kindnet-945825:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
	I1210 08:04:25.414963 1126792 oci.go:107] Successfully prepared a docker volume kindnet-945825
	I1210 08:04:25.415043 1126792 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1210 08:04:25.415061 1126792 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 08:04:25.415149 1126792 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v kindnet-945825:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 08:04:29.373926 1126792 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v kindnet-945825:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir: (3.958735804s)
	I1210 08:04:29.373963 1126792 kic.go:203] duration metric: took 3.958898202s to extract preloaded images to volume ...
	W1210 08:04:29.374113 1126792 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 08:04:29.374228 1126792 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 08:04:29.431815 1126792 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-945825 --name kindnet-945825 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-945825 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-945825 --network kindnet-945825 --ip 192.168.76.2 --volume kindnet-945825:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
	I1210 08:04:29.742233 1126792 cli_runner.go:164] Run: docker container inspect kindnet-945825 --format={{.State.Running}}
	I1210 08:04:29.766906 1126792 cli_runner.go:164] Run: docker container inspect kindnet-945825 --format={{.State.Status}}
	I1210 08:04:29.790150 1126792 cli_runner.go:164] Run: docker exec kindnet-945825 stat /var/lib/dpkg/alternatives/iptables
	I1210 08:04:29.860760 1126792 oci.go:144] the created container "kindnet-945825" has a running status.
	I1210 08:04:29.860796 1126792 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/kindnet-945825/id_rsa...
	I1210 08:04:29.963394 1126792 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-784887/.minikube/machines/kindnet-945825/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 08:04:29.989790 1126792 cli_runner.go:164] Run: docker container inspect kindnet-945825 --format={{.State.Status}}
	I1210 08:04:30.014046 1126792 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 08:04:30.014071 1126792 kic_runner.go:114] Args: [docker exec --privileged kindnet-945825 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 08:04:30.100905 1126792 cli_runner.go:164] Run: docker container inspect kindnet-945825 --format={{.State.Status}}
	I1210 08:04:30.133654 1126792 machine.go:94] provisionDockerMachine start ...
	I1210 08:04:30.133773 1126792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-945825
	I1210 08:04:30.160410 1126792 main.go:143] libmachine: Using SSH client type: native
	I1210 08:04:30.160857 1126792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33870 <nil> <nil>}
	I1210 08:04:30.160871 1126792 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 08:04:30.161631 1126792 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 08:04:33.298649 1126792 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-945825
	
	I1210 08:04:33.298678 1126792 ubuntu.go:182] provisioning hostname "kindnet-945825"
	I1210 08:04:33.298801 1126792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-945825
	I1210 08:04:33.317141 1126792 main.go:143] libmachine: Using SSH client type: native
	I1210 08:04:33.317465 1126792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33870 <nil> <nil>}
	I1210 08:04:33.317482 1126792 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-945825 && echo "kindnet-945825" | sudo tee /etc/hostname
	I1210 08:04:33.464055 1126792 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-945825
	
	I1210 08:04:33.464138 1126792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-945825
	I1210 08:04:33.481770 1126792 main.go:143] libmachine: Using SSH client type: native
	I1210 08:04:33.482167 1126792 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33870 <nil> <nil>}
	I1210 08:04:33.482199 1126792 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-945825' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-945825/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-945825' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 08:04:33.618786 1126792 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 08:04:33.618879 1126792 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 08:04:33.618929 1126792 ubuntu.go:190] setting up certificates
	I1210 08:04:33.618969 1126792 provision.go:84] configureAuth start
	I1210 08:04:33.619051 1126792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-945825
	I1210 08:04:33.636340 1126792 provision.go:143] copyHostCerts
	I1210 08:04:33.636407 1126792 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 08:04:33.636416 1126792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 08:04:33.636583 1126792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 08:04:33.636696 1126792 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 08:04:33.636702 1126792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 08:04:33.636733 1126792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 08:04:33.636818 1126792 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 08:04:33.636823 1126792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 08:04:33.636852 1126792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 08:04:33.636912 1126792 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.kindnet-945825 san=[127.0.0.1 192.168.76.2 kindnet-945825 localhost minikube]
	I1210 08:04:33.708748 1126792 provision.go:177] copyRemoteCerts
	I1210 08:04:33.708828 1126792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 08:04:33.708869 1126792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-945825
	I1210 08:04:33.726582 1126792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33870 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/kindnet-945825/id_rsa Username:docker}
	I1210 08:04:33.822458 1126792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1210 08:04:33.840559 1126792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 08:04:33.858554 1126792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 08:04:33.876879 1126792 provision.go:87] duration metric: took 257.877662ms to configureAuth
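
`configureAuth` above regenerates a server certificate whose SANs cover every name the machine answers to (127.0.0.1, the container IP, the profile name, localhost, minikube). A compressed sketch of issuing such a SAN certificate with Go's crypto/x509, self-signed here for brevity where minikube signs with its ca-key.pem:

    // sancert.go - issue a self-signed server cert carrying the SAN set
    // from the provision step above. Sketch only; minikube signs with its
    // CA key instead of self-signing.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-945825"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration above
            DNSNames:     []string{"kindnet-945825", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
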
	I1210 08:04:33.876912 1126792 ubuntu.go:206] setting minikube options for container-runtime
	I1210 08:04:33.877122 1126792 config.go:182] Loaded profile config "kindnet-945825": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1210 08:04:33.877137 1126792 machine.go:97] duration metric: took 3.743462136s to provisionDockerMachine
	I1210 08:04:33.877144 1126792 client.go:176] duration metric: took 9.1355118s to LocalClient.Create
	I1210 08:04:33.877159 1126792 start.go:167] duration metric: took 9.13557233s to libmachine.API.Create "kindnet-945825"
	I1210 08:04:33.877174 1126792 start.go:293] postStartSetup for "kindnet-945825" (driver="docker")
	I1210 08:04:33.877188 1126792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 08:04:33.877249 1126792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 08:04:33.877292 1126792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-945825
	I1210 08:04:33.895890 1126792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33870 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/kindnet-945825/id_rsa Username:docker}
	I1210 08:04:33.994768 1126792 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 08:04:33.998273 1126792 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 08:04:33.998299 1126792 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 08:04:33.998311 1126792 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 08:04:33.998373 1126792 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 08:04:33.998452 1126792 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 08:04:33.998583 1126792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 08:04:34.007920 1126792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 08:04:34.028078 1126792 start.go:296] duration metric: took 150.88468ms for postStartSetup
	I1210 08:04:34.028467 1126792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-945825
	I1210 08:04:34.048389 1126792 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/config.json ...
	I1210 08:04:34.048694 1126792 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 08:04:34.048748 1126792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-945825
	I1210 08:04:34.066907 1126792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33870 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/kindnet-945825/id_rsa Username:docker}
	I1210 08:04:34.164016 1126792 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 08:04:34.169032 1126792 start.go:128] duration metric: took 9.431205526s to createHost
	I1210 08:04:34.169058 1126792 start.go:83] releasing machines lock for "kindnet-945825", held for 9.431341569s
	I1210 08:04:34.169160 1126792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-945825
	I1210 08:04:34.186631 1126792 ssh_runner.go:195] Run: cat /version.json
	I1210 08:04:34.186669 1126792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 08:04:34.186700 1126792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-945825
	I1210 08:04:34.186733 1126792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-945825
	I1210 08:04:34.212396 1126792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33870 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/kindnet-945825/id_rsa Username:docker}
	I1210 08:04:34.213967 1126792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33870 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/kindnet-945825/id_rsa Username:docker}
	I1210 08:04:34.311029 1126792 ssh_runner.go:195] Run: systemctl --version
	I1210 08:04:34.415236 1126792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 08:04:34.419963 1126792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 08:04:34.420036 1126792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 08:04:34.451473 1126792 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1210 08:04:34.451499 1126792 start.go:496] detecting cgroup driver to use...
	I1210 08:04:34.451543 1126792 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 08:04:34.451604 1126792 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 08:04:34.467472 1126792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 08:04:34.481521 1126792 docker.go:218] disabling cri-docker service (if available) ...
	I1210 08:04:34.481611 1126792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 08:04:34.500071 1126792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 08:04:34.519810 1126792 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 08:04:34.645659 1126792 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 08:04:34.775258 1126792 docker.go:234] disabling docker service ...
	I1210 08:04:34.775353 1126792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 08:04:34.799146 1126792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 08:04:34.813979 1126792 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 08:04:34.931533 1126792 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 08:04:35.053215 1126792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 08:04:35.067812 1126792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 08:04:35.082741 1126792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 08:04:35.092436 1126792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 08:04:35.103346 1126792 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 08:04:35.103466 1126792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 08:04:35.112801 1126792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 08:04:35.124339 1126792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 08:04:35.133931 1126792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 08:04:35.144421 1126792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 08:04:35.153450 1126792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 08:04:35.163378 1126792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 08:04:35.173002 1126792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 08:04:35.183401 1126792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 08:04:35.191742 1126792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 08:04:35.199995 1126792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 08:04:35.322361 1126792 ssh_runner.go:195] Run: sudo systemctl restart containerd
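
The run of `sed -i` commands above rewrites /etc/containerd/config.toml in place: cgroupfs instead of systemd cgroups, the pause:3.10.1 sandbox image, the v2 runc shim, and unprivileged ports enabled, followed by a daemon-reload and restart. For reference, one of those edits expressed as the equivalent line-oriented rewrite in Go (an illustration, not minikube source):

    // cgroupfix.go - Go equivalent of one sed edit above:
    // sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    package main

    import (
        "os"
        "regexp"
    )

    var re = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)

    func main() {
        const path = "/etc/containerd/config.toml"
        b, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        out := re.ReplaceAll(b, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            panic(err)
        }
    }
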
	I1210 08:04:35.473186 1126792 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 08:04:35.473263 1126792 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 08:04:35.477443 1126792 start.go:564] Will wait 60s for crictl version
	I1210 08:04:35.477511 1126792 ssh_runner.go:195] Run: which crictl
	I1210 08:04:35.481255 1126792 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 08:04:35.507540 1126792 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 08:04:35.507615 1126792 ssh_runner.go:195] Run: containerd --version
	I1210 08:04:35.531582 1126792 ssh_runner.go:195] Run: containerd --version
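
After the restart, the runner polls for /run/containerd/containerd.sock (up to 60s, per the `Will wait 60s for socket path` line) before trusting the runtime and querying crictl. A minimal version of that readiness poll; the 250ms interval is my assumption, the path and timeout come from the log:

    // sockwait.go - block until a socket path exists or a timeout elapses,
    // as in the 60s containerd.sock wait above. Poll interval is assumed.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("%s did not appear within %s", path, timeout)
            }
            time.Sleep(250 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
            panic(err)
        }
        fmt.Println("containerd socket ready")
    }
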
	I1210 08:04:35.559334 1126792 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1210 08:04:35.562545 1126792 cli_runner.go:164] Run: docker network inspect kindnet-945825 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 08:04:35.578767 1126792 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 08:04:35.582686 1126792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 08:04:35.592919 1126792 kubeadm.go:884] updating cluster {Name:kindnet-945825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-945825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 08:04:35.593059 1126792 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1210 08:04:35.593151 1126792 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 08:04:35.619030 1126792 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 08:04:35.619057 1126792 containerd.go:534] Images already preloaded, skipping extraction
	I1210 08:04:35.619122 1126792 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 08:04:35.645459 1126792 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 08:04:35.645486 1126792 cache_images.go:86] Images are preloaded, skipping loading
	I1210 08:04:35.645496 1126792 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 containerd true true} ...
	I1210 08:04:35.645587 1126792 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-945825 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-945825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1210 08:04:35.645660 1126792 ssh_runner.go:195] Run: sudo crictl info
	I1210 08:04:35.671142 1126792 cni.go:84] Creating CNI manager for "kindnet"
	I1210 08:04:35.671180 1126792 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 08:04:35.671206 1126792 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-945825 NodeName:kindnet-945825 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 08:04:35.671323 1126792 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kindnet-945825"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 08:04:35.671398 1126792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 08:04:35.679529 1126792 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 08:04:35.679603 1126792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 08:04:35.687639 1126792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1210 08:04:35.701128 1126792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 08:04:35.714598 1126792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
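The rendered InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration document above is what lands in /var/tmp/minikube/kubeadm.yaml and is handed to kubeadm init at 08:04:37. As a sanity check outside minikube's own flow, a kubeadm new enough to ship the `config validate` subcommand (v1.26+) can vet that same file; a sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Not part of minikube's flow -- an optional pre-flight on the
		// generated config, assuming kubeadm v1.26+ on PATH.
		out, err := exec.Command("sudo", "kubeadm", "config", "validate",
			"--config", "/var/tmp/minikube/kubeadm.yaml").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("validation failed:", err)
		}
	}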
	I1210 08:04:35.727638 1126792 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 08:04:35.731298 1126792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
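The one-liner above pins the control-plane.minikube.internal record idempotently: any stale line for that hostname is filtered out of /etc/hosts before the fresh IP is appended. The same logic in Go (path, IP, and hostname mirror the log; error handling abbreviated):

	package main

	import (
		"os"
		"strings"
	)

	// pinHostRecord drops any existing line ending in "<tab><host>" and appends
	// "<ip><tab><host>", mirroring the grep-and-append shell pipeline above.
	func pinHostRecord(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // stale record for this hostname
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		_ = pinHostRecord("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal")
	}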
	I1210 08:04:35.740959 1126792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 08:04:35.868387 1126792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 08:04:35.885327 1126792 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825 for IP: 192.168.76.2
	I1210 08:04:35.885356 1126792 certs.go:195] generating shared ca certs ...
	I1210 08:04:35.885374 1126792 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:04:35.885522 1126792 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 08:04:35.885576 1126792 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 08:04:35.885589 1126792 certs.go:257] generating profile certs ...
	I1210 08:04:35.885649 1126792 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/client.key
	I1210 08:04:35.885666 1126792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/client.crt with IP's: []
	I1210 08:04:36.123714 1126792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/client.crt ...
	I1210 08:04:36.123749 1126792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/client.crt: {Name:mk2c69ac3775432e7e4fa0008b48a59eb6881219 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:04:36.123957 1126792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/client.key ...
	I1210 08:04:36.123975 1126792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/client.key: {Name:mk6743544b23abb1ee178d519a84306a76abcfed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:04:36.124076 1126792 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/apiserver.key.776409b5
	I1210 08:04:36.124095 1126792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/apiserver.crt.776409b5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1210 08:04:36.254358 1126792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/apiserver.crt.776409b5 ...
	I1210 08:04:36.254392 1126792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/apiserver.crt.776409b5: {Name:mk920cce0c0579f598e283e741402e750f6b9e88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:04:36.254597 1126792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/apiserver.key.776409b5 ...
	I1210 08:04:36.254615 1126792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/apiserver.key.776409b5: {Name:mk189a3e4459a8d9ba90b59c8f55a68c1d49f8f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:04:36.254707 1126792 certs.go:382] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/apiserver.crt.776409b5 -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/apiserver.crt
	I1210 08:04:36.254791 1126792 certs.go:386] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/apiserver.key.776409b5 -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/apiserver.key
	I1210 08:04:36.254862 1126792 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/proxy-client.key
	I1210 08:04:36.254882 1126792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/proxy-client.crt with IP's: []
	I1210 08:04:36.515069 1126792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/proxy-client.crt ...
	I1210 08:04:36.515113 1126792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/proxy-client.crt: {Name:mk77fb5c2b2c695b03b4cac867d89db011a98929 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:04:36.515306 1126792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/proxy-client.key ...
	I1210 08:04:36.515320 1126792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/proxy-client.key: {Name:mk7dfb22305e12580ddac9a183ac0f519a81bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:04:36.515524 1126792 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 08:04:36.515572 1126792 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 08:04:36.515585 1126792 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 08:04:36.515613 1126792 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 08:04:36.515643 1126792 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 08:04:36.515670 1126792 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 08:04:36.515719 1126792 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 08:04:36.516292 1126792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 08:04:36.537783 1126792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 08:04:36.558969 1126792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 08:04:36.578969 1126792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 08:04:36.599007 1126792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 08:04:36.618108 1126792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 08:04:36.635621 1126792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 08:04:36.653074 1126792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 08:04:36.671841 1126792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 08:04:36.689926 1126792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 08:04:36.708342 1126792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 08:04:36.726292 1126792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 08:04:36.739068 1126792 ssh_runner.go:195] Run: openssl version
	I1210 08:04:36.745509 1126792 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 08:04:36.753168 1126792 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 08:04:36.761165 1126792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 08:04:36.764980 1126792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 08:04:36.765091 1126792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 08:04:36.807589 1126792 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 08:04:36.816484 1126792 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7867512.pem /etc/ssl/certs/3ec20f2e.0
	I1210 08:04:36.825932 1126792 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:04:36.835021 1126792 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 08:04:36.843898 1126792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:04:36.848862 1126792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:04:36.849007 1126792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:04:36.891645 1126792 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 08:04:36.900143 1126792 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 08:04:36.907935 1126792 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 08:04:36.915697 1126792 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 08:04:36.923738 1126792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 08:04:36.927920 1126792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 08:04:36.927992 1126792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 08:04:36.970035 1126792 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 08:04:36.978416 1126792 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/786751.pem /etc/ssl/certs/51391683.0
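Each `openssl x509 -hash -noout` call above prints the certificate's subject hash, which is exactly the symlink name OpenSSL expects under /etc/ssl/certs (for example b5213941.0 for minikubeCA.pem). A sketch of that hash-and-link step, assuming openssl on PATH and write access to /etc/ssl/certs:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func linkCert(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		os.Remove(link) // replace any stale link, mirroring `ln -fs`
		return os.Symlink(pem, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}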
	I1210 08:04:36.986086 1126792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 08:04:36.989944 1126792 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 08:04:36.990016 1126792 kubeadm.go:401] StartCluster: {Name:kindnet-945825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-945825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 08:04:36.990098 1126792 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 08:04:36.990161 1126792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 08:04:37.019172 1126792 cri.go:89] found id: ""
	I1210 08:04:37.019348 1126792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 08:04:37.029452 1126792 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 08:04:37.038396 1126792 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 08:04:37.038632 1126792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 08:04:37.047402 1126792 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 08:04:37.047486 1126792 kubeadm.go:158] found existing configuration files:
	
	I1210 08:04:37.047565 1126792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 08:04:37.055651 1126792 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 08:04:37.055767 1126792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 08:04:37.063350 1126792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 08:04:37.071066 1126792 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 08:04:37.071157 1126792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 08:04:37.078861 1126792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 08:04:37.086753 1126792 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 08:04:37.086824 1126792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 08:04:37.094501 1126792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 08:04:37.102300 1126792 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 08:04:37.102422 1126792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 08:04:37.110135 1126792 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 08:04:37.150691 1126792 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 08:04:37.150758 1126792 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 08:04:37.173150 1126792 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 08:04:37.173224 1126792 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 08:04:37.173264 1126792 kubeadm.go:319] OS: Linux
	I1210 08:04:37.173311 1126792 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 08:04:37.173361 1126792 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 08:04:37.173410 1126792 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 08:04:37.173459 1126792 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 08:04:37.173507 1126792 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 08:04:37.173556 1126792 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 08:04:37.173603 1126792 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 08:04:37.173652 1126792 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 08:04:37.173709 1126792 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 08:04:37.245234 1126792 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 08:04:37.245381 1126792 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 08:04:37.245503 1126792 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 08:04:37.251483 1126792 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 08:04:37.258226 1126792 out.go:252]   - Generating certificates and keys ...
	I1210 08:04:37.258328 1126792 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 08:04:37.258410 1126792 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 08:04:37.773403 1126792 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 08:04:38.706635 1126792 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 08:04:39.045754 1126792 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 08:04:39.363473 1126792 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 08:04:40.199364 1126792 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 08:04:40.199799 1126792 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-945825 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 08:04:40.510434 1126792 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 08:04:40.510811 1126792 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-945825 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 08:04:41.429299 1126792 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 08:04:42.077316 1126792 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 08:04:42.847282 1126792 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 08:04:42.847677 1126792 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 08:04:43.961352 1126792 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 08:04:44.234283 1126792 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 08:04:44.986378 1126792 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 08:04:45.217311 1126792 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 08:04:46.751972 1126792 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 08:04:46.752570 1126792 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 08:04:46.755169 1126792 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 08:04:46.758789 1126792 out.go:252]   - Booting up control plane ...
	I1210 08:04:46.758894 1126792 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 08:04:46.758972 1126792 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 08:04:46.759046 1126792 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 08:04:46.777425 1126792 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 08:04:46.777700 1126792 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 08:04:46.786067 1126792 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 08:04:46.786164 1126792 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 08:04:46.786201 1126792 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 08:04:46.921519 1126792 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 08:04:46.921640 1126792 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 08:04:48.426877 1126792 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.502290783s
	I1210 08:04:48.428142 1126792 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 08:04:48.428363 1126792 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1210 08:04:48.428482 1126792 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 08:04:48.428788 1126792 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 08:04:52.772664 1126792 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.343552094s
	I1210 08:04:54.085046 1126792 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.655723257s
	I1210 08:04:55.930063 1126792 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.501487822s
	I1210 08:04:55.965308 1126792 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 08:04:55.980165 1126792 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 08:04:55.992001 1126792 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 08:04:55.992213 1126792 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-945825 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 08:04:56.007304 1126792 kubeadm.go:319] [bootstrap-token] Using token: 7h03m2.gw4dsvza8zfwsjy9
	I1210 08:04:56.010410 1126792 out.go:252]   - Configuring RBAC rules ...
	I1210 08:04:56.010564 1126792 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 08:04:56.017670 1126792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 08:04:56.030185 1126792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 08:04:56.035078 1126792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 08:04:56.059734 1126792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 08:04:56.071138 1126792 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 08:04:56.338133 1126792 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 08:04:56.770421 1126792 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 08:04:57.340417 1126792 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 08:04:57.342215 1126792 kubeadm.go:319] 
	I1210 08:04:57.342285 1126792 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 08:04:57.342304 1126792 kubeadm.go:319] 
	I1210 08:04:57.342383 1126792 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 08:04:57.342392 1126792 kubeadm.go:319] 
	I1210 08:04:57.342417 1126792 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 08:04:57.342512 1126792 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 08:04:57.342570 1126792 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 08:04:57.342582 1126792 kubeadm.go:319] 
	I1210 08:04:57.342636 1126792 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 08:04:57.342643 1126792 kubeadm.go:319] 
	I1210 08:04:57.342690 1126792 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 08:04:57.342698 1126792 kubeadm.go:319] 
	I1210 08:04:57.342750 1126792 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 08:04:57.342828 1126792 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 08:04:57.342901 1126792 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 08:04:57.342909 1126792 kubeadm.go:319] 
	I1210 08:04:57.342994 1126792 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 08:04:57.343074 1126792 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 08:04:57.343082 1126792 kubeadm.go:319] 
	I1210 08:04:57.343166 1126792 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 7h03m2.gw4dsvza8zfwsjy9 \
	I1210 08:04:57.343275 1126792 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e9f3cb78cb77d4f01fb49055e1f2de1580fc701c72db340d5c15a42a39b8dd0 \
	I1210 08:04:57.343301 1126792 kubeadm.go:319] 	--control-plane 
	I1210 08:04:57.343312 1126792 kubeadm.go:319] 
	I1210 08:04:57.343397 1126792 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 08:04:57.343406 1126792 kubeadm.go:319] 
	I1210 08:04:57.343488 1126792 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7h03m2.gw4dsvza8zfwsjy9 \
	I1210 08:04:57.343593 1126792 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e9f3cb78cb77d4f01fb49055e1f2de1580fc701c72db340d5c15a42a39b8dd0 
	I1210 08:04:57.348250 1126792 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1210 08:04:57.348467 1126792 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 08:04:57.348571 1126792 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
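The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's Subject Public Key Info, per the kubeadm discovery docs. A sketch reproducing it from the CA written earlier (path as in the log):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm's discovery hash is SHA-256 over the DER-encoded SPKI.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}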
	I1210 08:04:57.348591 1126792 cni.go:84] Creating CNI manager for "kindnet"
	I1210 08:04:57.351755 1126792 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 08:04:57.354653 1126792 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 08:04:57.359008 1126792 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1210 08:04:57.359029 1126792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 08:04:57.373981 1126792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 08:04:57.714515 1126792 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 08:04:57.714649 1126792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 08:04:57.714733 1126792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-945825 minikube.k8s.io/updated_at=2025_12_10T08_04_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9 minikube.k8s.io/name=kindnet-945825 minikube.k8s.io/primary=true
	I1210 08:04:57.887798 1126792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 08:04:57.887861 1126792 ops.go:34] apiserver oom_adj: -16
	I1210 08:04:58.388377 1126792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 08:04:58.887973 1126792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 08:04:59.388178 1126792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 08:04:59.887970 1126792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 08:05:00.391643 1126792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 08:05:00.887834 1126792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 08:05:01.388108 1126792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 08:05:01.482777 1126792 kubeadm.go:1114] duration metric: took 3.768171758s to wait for elevateKubeSystemPrivileges
	I1210 08:05:01.482815 1126792 kubeadm.go:403] duration metric: took 24.492803477s to StartCluster
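The elevateKubeSystemPrivileges wait above simply re-runs `kubectl get sa default` on a fixed cadence until the controller manager has created the default ServiceAccount. A sketch of that poll loop (helper name illustrative; kubeconfig path as in the log):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
			if err := cmd.Run(); err == nil {
				return nil // the default ServiceAccount exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not created within %s", timeout)
	}

	func main() {
		fmt.Println(waitDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute))
	}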
	I1210 08:05:01.482836 1126792 settings.go:142] acquiring lock: {Name:mkea9bfe0055ec090e4ce10a287599541b65b38e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:05:01.482907 1126792 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 08:05:01.483846 1126792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:05:01.484077 1126792 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 08:05:01.484176 1126792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 08:05:01.484449 1126792 config.go:182] Loaded profile config "kindnet-945825": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1210 08:05:01.484488 1126792 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 08:05:01.484554 1126792 addons.go:70] Setting storage-provisioner=true in profile "kindnet-945825"
	I1210 08:05:01.484568 1126792 addons.go:239] Setting addon storage-provisioner=true in "kindnet-945825"
	I1210 08:05:01.484611 1126792 host.go:66] Checking if "kindnet-945825" exists ...
	I1210 08:05:01.485113 1126792 cli_runner.go:164] Run: docker container inspect kindnet-945825 --format={{.State.Status}}
	I1210 08:05:01.485767 1126792 addons.go:70] Setting default-storageclass=true in profile "kindnet-945825"
	I1210 08:05:01.485791 1126792 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-945825"
	I1210 08:05:01.486142 1126792 cli_runner.go:164] Run: docker container inspect kindnet-945825 --format={{.State.Status}}
	I1210 08:05:01.489145 1126792 out.go:179] * Verifying Kubernetes components...
	I1210 08:05:01.495576 1126792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 08:05:01.518826 1126792 addons.go:239] Setting addon default-storageclass=true in "kindnet-945825"
	I1210 08:05:01.518874 1126792 host.go:66] Checking if "kindnet-945825" exists ...
	I1210 08:05:01.519315 1126792 cli_runner.go:164] Run: docker container inspect kindnet-945825 --format={{.State.Status}}
	I1210 08:05:01.524726 1126792 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 08:05:01.527928 1126792 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 08:05:01.527955 1126792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 08:05:01.528025 1126792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-945825
	I1210 08:05:01.565287 1126792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33870 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/kindnet-945825/id_rsa Username:docker}
	I1210 08:05:01.571528 1126792 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 08:05:01.571550 1126792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 08:05:01.571621 1126792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-945825
	I1210 08:05:01.602641 1126792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33870 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/kindnet-945825/id_rsa Username:docker}
	I1210 08:05:01.760908 1126792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 08:05:01.805112 1126792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 08:05:01.952609 1126792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 08:05:01.993269 1126792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 08:05:02.476520 1126792 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1210 08:05:02.479132 1126792 node_ready.go:35] waiting up to 15m0s for node "kindnet-945825" to be "Ready" ...
	I1210 08:05:02.983368 1126792 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-945825" context rescaled to 1 replicas
	I1210 08:05:02.998978 1126792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.005632401s)
	I1210 08:05:03.002842 1126792 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1210 08:05:03.005538 1126792 addons.go:530] duration metric: took 1.521034247s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1210 08:05:04.482391 1126792 node_ready.go:57] node "kindnet-945825" has "Ready":"False" status (will retry)
	W1210 08:05:06.482751 1126792 node_ready.go:57] node "kindnet-945825" has "Ready":"False" status (will retry)
	W1210 08:05:08.982609 1126792 node_ready.go:57] node "kindnet-945825" has "Ready":"False" status (will retry)
	W1210 08:05:11.482787 1126792 node_ready.go:57] node "kindnet-945825" has "Ready":"False" status (will retry)
	W1210 08:05:13.982372 1126792 node_ready.go:57] node "kindnet-945825" has "Ready":"False" status (will retry)
	W1210 08:05:16.482375 1126792 node_ready.go:57] node "kindnet-945825" has "Ready":"False" status (will retry)
	W1210 08:05:18.482962 1126792 node_ready.go:57] node "kindnet-945825" has "Ready":"False" status (will retry)
	W1210 08:05:20.983688 1126792 node_ready.go:57] node "kindnet-945825" has "Ready":"False" status (will retry)
	W1210 08:05:23.481924 1126792 node_ready.go:57] node "kindnet-945825" has "Ready":"False" status (will retry)
	W1210 08:05:25.483012 1126792 node_ready.go:57] node "kindnet-945825" has "Ready":"False" status (will retry)
	W1210 08:05:27.982875 1126792 node_ready.go:57] node "kindnet-945825" has "Ready":"False" status (will retry)
	W1210 08:05:30.482559 1126792 node_ready.go:57] node "kindnet-945825" has "Ready":"False" status (will retry)
	W1210 08:05:32.983139 1126792 node_ready.go:57] node "kindnet-945825" has "Ready":"False" status (will retry)
	W1210 08:05:35.482332 1126792 node_ready.go:57] node "kindnet-945825" has "Ready":"False" status (will retry)
	W1210 08:05:37.982256 1126792 node_ready.go:57] node "kindnet-945825" has "Ready":"False" status (will retry)
	W1210 08:05:40.482212 1126792 node_ready.go:57] node "kindnet-945825" has "Ready":"False" status (will retry)
	W1210 08:05:42.482308 1126792 node_ready.go:57] node "kindnet-945825" has "Ready":"False" status (will retry)
	I1210 08:05:43.482351 1126792 node_ready.go:49] node "kindnet-945825" is "Ready"
	I1210 08:05:43.482385 1126792 node_ready.go:38] duration metric: took 41.003164436s for node "kindnet-945825" to be "Ready" ...
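The node wait above polls the Ready condition until it flips to True (41s here, once kindnet had wired up the pod network). Equivalent probing with kubectl's JSONPath filter, node name as in the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func nodeReady(node string) bool {
		out, err := exec.Command("kubectl", "get", "node", node, "-o",
			`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		return err == nil && strings.TrimSpace(string(out)) == "True"
	}

	func main() {
		for !nodeReady("kindnet-945825") {
			fmt.Println(`node "kindnet-945825" has "Ready":"False" (will retry)`)
			time.Sleep(2 * time.Second)
		}
		fmt.Println(`node "kindnet-945825" is "Ready"`)
	}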
	I1210 08:05:43.482398 1126792 api_server.go:52] waiting for apiserver process to appear ...
	I1210 08:05:43.482513 1126792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:05:43.495232 1126792 api_server.go:72] duration metric: took 42.011115564s to wait for apiserver process to appear ...
	I1210 08:05:43.495259 1126792 api_server.go:88] waiting for apiserver healthz status ...
	I1210 08:05:43.495279 1126792 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 08:05:43.503985 1126792 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1210 08:05:43.505187 1126792 api_server.go:141] control plane version: v1.34.2
	I1210 08:05:43.505242 1126792 api_server.go:131] duration metric: took 9.97519ms to wait for apiserver health ...
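The healthz wait above is a plain HTTPS GET against the apiserver; a 200 with body "ok" ends it. A minimal sketch of that probe (certificate verification skipped for brevity; a real check would pin /var/lib/minikube/certs/ca.crt instead):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		}}
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}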
	I1210 08:05:43.505252 1126792 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 08:05:43.511744 1126792 system_pods.go:59] 8 kube-system pods found
	I1210 08:05:43.511816 1126792 system_pods.go:61] "coredns-66bc5c9577-qbg99" [b3648786-f1e0-4970-ad9b-b3a5a9d2f979] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 08:05:43.511824 1126792 system_pods.go:61] "etcd-kindnet-945825" [033eb9dc-bdf3-4b94-a0e9-f864c476dcc8] Running
	I1210 08:05:43.511831 1126792 system_pods.go:61] "kindnet-nw24j" [630b0c33-78c4-4040-871c-a94d16497f7a] Running
	I1210 08:05:43.511835 1126792 system_pods.go:61] "kube-apiserver-kindnet-945825" [7e5618ed-976b-4cd0-91eb-cd8a5a89d3c6] Running
	I1210 08:05:43.511846 1126792 system_pods.go:61] "kube-controller-manager-kindnet-945825" [c0653c38-92c9-4779-ae82-8a452e092d7e] Running
	I1210 08:05:43.511852 1126792 system_pods.go:61] "kube-proxy-smnb9" [dd6469ae-752c-4665-b6a6-f13ae63a92a9] Running
	I1210 08:05:43.511863 1126792 system_pods.go:61] "kube-scheduler-kindnet-945825" [87aa7483-0763-409f-8efb-4fffd5dfa633] Running
	I1210 08:05:43.511881 1126792 system_pods.go:61] "storage-provisioner" [f401757c-b89f-41a6-8f56-418928eb86de] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 08:05:43.511887 1126792 system_pods.go:74] duration metric: took 6.62893ms to wait for pod list to return data ...
	I1210 08:05:43.511906 1126792 default_sa.go:34] waiting for default service account to be created ...
	I1210 08:05:43.519837 1126792 default_sa.go:45] found service account: "default"
	I1210 08:05:43.519867 1126792 default_sa.go:55] duration metric: took 7.94935ms for default service account to be created ...
	I1210 08:05:43.519887 1126792 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 08:05:43.538165 1126792 system_pods.go:86] 8 kube-system pods found
	I1210 08:05:43.538204 1126792 system_pods.go:89] "coredns-66bc5c9577-qbg99" [b3648786-f1e0-4970-ad9b-b3a5a9d2f979] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 08:05:43.538212 1126792 system_pods.go:89] "etcd-kindnet-945825" [033eb9dc-bdf3-4b94-a0e9-f864c476dcc8] Running
	I1210 08:05:43.538220 1126792 system_pods.go:89] "kindnet-nw24j" [630b0c33-78c4-4040-871c-a94d16497f7a] Running
	I1210 08:05:43.538225 1126792 system_pods.go:89] "kube-apiserver-kindnet-945825" [7e5618ed-976b-4cd0-91eb-cd8a5a89d3c6] Running
	I1210 08:05:43.538230 1126792 system_pods.go:89] "kube-controller-manager-kindnet-945825" [c0653c38-92c9-4779-ae82-8a452e092d7e] Running
	I1210 08:05:43.538235 1126792 system_pods.go:89] "kube-proxy-smnb9" [dd6469ae-752c-4665-b6a6-f13ae63a92a9] Running
	I1210 08:05:43.538239 1126792 system_pods.go:89] "kube-scheduler-kindnet-945825" [87aa7483-0763-409f-8efb-4fffd5dfa633] Running
	I1210 08:05:43.538245 1126792 system_pods.go:89] "storage-provisioner" [f401757c-b89f-41a6-8f56-418928eb86de] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 08:05:43.538273 1126792 retry.go:31] will retry after 306.670327ms: missing components: kube-dns
	I1210 08:05:43.849277 1126792 system_pods.go:86] 8 kube-system pods found
	I1210 08:05:43.849308 1126792 system_pods.go:89] "coredns-66bc5c9577-qbg99" [b3648786-f1e0-4970-ad9b-b3a5a9d2f979] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 08:05:43.849314 1126792 system_pods.go:89] "etcd-kindnet-945825" [033eb9dc-bdf3-4b94-a0e9-f864c476dcc8] Running
	I1210 08:05:43.849323 1126792 system_pods.go:89] "kindnet-nw24j" [630b0c33-78c4-4040-871c-a94d16497f7a] Running
	I1210 08:05:43.849327 1126792 system_pods.go:89] "kube-apiserver-kindnet-945825" [7e5618ed-976b-4cd0-91eb-cd8a5a89d3c6] Running
	I1210 08:05:43.849331 1126792 system_pods.go:89] "kube-controller-manager-kindnet-945825" [c0653c38-92c9-4779-ae82-8a452e092d7e] Running
	I1210 08:05:43.849335 1126792 system_pods.go:89] "kube-proxy-smnb9" [dd6469ae-752c-4665-b6a6-f13ae63a92a9] Running
	I1210 08:05:43.849339 1126792 system_pods.go:89] "kube-scheduler-kindnet-945825" [87aa7483-0763-409f-8efb-4fffd5dfa633] Running
	I1210 08:05:43.849344 1126792 system_pods.go:89] "storage-provisioner" [f401757c-b89f-41a6-8f56-418928eb86de] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 08:05:43.849359 1126792 retry.go:31] will retry after 316.962311ms: missing components: kube-dns
	I1210 08:05:44.179482 1126792 system_pods.go:86] 8 kube-system pods found
	I1210 08:05:44.179524 1126792 system_pods.go:89] "coredns-66bc5c9577-qbg99" [b3648786-f1e0-4970-ad9b-b3a5a9d2f979] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 08:05:44.179531 1126792 system_pods.go:89] "etcd-kindnet-945825" [033eb9dc-bdf3-4b94-a0e9-f864c476dcc8] Running
	I1210 08:05:44.179543 1126792 system_pods.go:89] "kindnet-nw24j" [630b0c33-78c4-4040-871c-a94d16497f7a] Running
	I1210 08:05:44.179547 1126792 system_pods.go:89] "kube-apiserver-kindnet-945825" [7e5618ed-976b-4cd0-91eb-cd8a5a89d3c6] Running
	I1210 08:05:44.179551 1126792 system_pods.go:89] "kube-controller-manager-kindnet-945825" [c0653c38-92c9-4779-ae82-8a452e092d7e] Running
	I1210 08:05:44.179556 1126792 system_pods.go:89] "kube-proxy-smnb9" [dd6469ae-752c-4665-b6a6-f13ae63a92a9] Running
	I1210 08:05:44.179560 1126792 system_pods.go:89] "kube-scheduler-kindnet-945825" [87aa7483-0763-409f-8efb-4fffd5dfa633] Running
	I1210 08:05:44.179566 1126792 system_pods.go:89] "storage-provisioner" [f401757c-b89f-41a6-8f56-418928eb86de] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 08:05:44.179580 1126792 retry.go:31] will retry after 404.149424ms: missing components: kube-dns
	I1210 08:05:44.587795 1126792 system_pods.go:86] 8 kube-system pods found
	I1210 08:05:44.587828 1126792 system_pods.go:89] "coredns-66bc5c9577-qbg99" [b3648786-f1e0-4970-ad9b-b3a5a9d2f979] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 08:05:44.587836 1126792 system_pods.go:89] "etcd-kindnet-945825" [033eb9dc-bdf3-4b94-a0e9-f864c476dcc8] Running
	I1210 08:05:44.587843 1126792 system_pods.go:89] "kindnet-nw24j" [630b0c33-78c4-4040-871c-a94d16497f7a] Running
	I1210 08:05:44.587847 1126792 system_pods.go:89] "kube-apiserver-kindnet-945825" [7e5618ed-976b-4cd0-91eb-cd8a5a89d3c6] Running
	I1210 08:05:44.587851 1126792 system_pods.go:89] "kube-controller-manager-kindnet-945825" [c0653c38-92c9-4779-ae82-8a452e092d7e] Running
	I1210 08:05:44.587858 1126792 system_pods.go:89] "kube-proxy-smnb9" [dd6469ae-752c-4665-b6a6-f13ae63a92a9] Running
	I1210 08:05:44.587862 1126792 system_pods.go:89] "kube-scheduler-kindnet-945825" [87aa7483-0763-409f-8efb-4fffd5dfa633] Running
	I1210 08:05:44.587868 1126792 system_pods.go:89] "storage-provisioner" [f401757c-b89f-41a6-8f56-418928eb86de] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 08:05:44.587892 1126792 retry.go:31] will retry after 383.529576ms: missing components: kube-dns
	I1210 08:05:44.975726 1126792 system_pods.go:86] 8 kube-system pods found
	I1210 08:05:44.975764 1126792 system_pods.go:89] "coredns-66bc5c9577-qbg99" [b3648786-f1e0-4970-ad9b-b3a5a9d2f979] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 08:05:44.975771 1126792 system_pods.go:89] "etcd-kindnet-945825" [033eb9dc-bdf3-4b94-a0e9-f864c476dcc8] Running
	I1210 08:05:44.975778 1126792 system_pods.go:89] "kindnet-nw24j" [630b0c33-78c4-4040-871c-a94d16497f7a] Running
	I1210 08:05:44.975782 1126792 system_pods.go:89] "kube-apiserver-kindnet-945825" [7e5618ed-976b-4cd0-91eb-cd8a5a89d3c6] Running
	I1210 08:05:44.975786 1126792 system_pods.go:89] "kube-controller-manager-kindnet-945825" [c0653c38-92c9-4779-ae82-8a452e092d7e] Running
	I1210 08:05:44.975790 1126792 system_pods.go:89] "kube-proxy-smnb9" [dd6469ae-752c-4665-b6a6-f13ae63a92a9] Running
	I1210 08:05:44.975794 1126792 system_pods.go:89] "kube-scheduler-kindnet-945825" [87aa7483-0763-409f-8efb-4fffd5dfa633] Running
	I1210 08:05:44.975801 1126792 system_pods.go:89] "storage-provisioner" [f401757c-b89f-41a6-8f56-418928eb86de] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 08:05:44.975815 1126792 retry.go:31] will retry after 718.882263ms: missing components: kube-dns
	I1210 08:05:45.699410 1126792 system_pods.go:86] 8 kube-system pods found
	I1210 08:05:45.699445 1126792 system_pods.go:89] "coredns-66bc5c9577-qbg99" [b3648786-f1e0-4970-ad9b-b3a5a9d2f979] Running
	I1210 08:05:45.699453 1126792 system_pods.go:89] "etcd-kindnet-945825" [033eb9dc-bdf3-4b94-a0e9-f864c476dcc8] Running
	I1210 08:05:45.699458 1126792 system_pods.go:89] "kindnet-nw24j" [630b0c33-78c4-4040-871c-a94d16497f7a] Running
	I1210 08:05:45.699462 1126792 system_pods.go:89] "kube-apiserver-kindnet-945825" [7e5618ed-976b-4cd0-91eb-cd8a5a89d3c6] Running
	I1210 08:05:45.699467 1126792 system_pods.go:89] "kube-controller-manager-kindnet-945825" [c0653c38-92c9-4779-ae82-8a452e092d7e] Running
	I1210 08:05:45.699472 1126792 system_pods.go:89] "kube-proxy-smnb9" [dd6469ae-752c-4665-b6a6-f13ae63a92a9] Running
	I1210 08:05:45.699476 1126792 system_pods.go:89] "kube-scheduler-kindnet-945825" [87aa7483-0763-409f-8efb-4fffd5dfa633] Running
	I1210 08:05:45.699480 1126792 system_pods.go:89] "storage-provisioner" [f401757c-b89f-41a6-8f56-418928eb86de] Running
	I1210 08:05:45.699487 1126792 system_pods.go:126] duration metric: took 2.179594822s to wait for k8s-apps to be running ...
	I1210 08:05:45.699500 1126792 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 08:05:45.699559 1126792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 08:05:45.713452 1126792 system_svc.go:56] duration metric: took 13.936777ms WaitForService to wait for kubelet
	I1210 08:05:45.713484 1126792 kubeadm.go:587] duration metric: took 44.229373328s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 08:05:45.713504 1126792 node_conditions.go:102] verifying NodePressure condition ...
	I1210 08:05:45.717408 1126792 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1210 08:05:45.717443 1126792 node_conditions.go:123] node cpu capacity is 2
	I1210 08:05:45.717458 1126792 node_conditions.go:105] duration metric: took 3.948804ms to run NodePressure ...
	I1210 08:05:45.717471 1126792 start.go:242] waiting for startup goroutines ...
	I1210 08:05:45.717478 1126792 start.go:247] waiting for cluster config update ...
	I1210 08:05:45.717491 1126792 start.go:256] writing updated cluster config ...
	I1210 08:05:45.717777 1126792 ssh_runner.go:195] Run: rm -f paused
	I1210 08:05:45.721738 1126792 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 08:05:45.726180 1126792 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qbg99" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:05:45.731036 1126792 pod_ready.go:94] pod "coredns-66bc5c9577-qbg99" is "Ready"
	I1210 08:05:45.731063 1126792 pod_ready.go:86] duration metric: took 4.854719ms for pod "coredns-66bc5c9577-qbg99" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:05:45.733367 1126792 pod_ready.go:83] waiting for pod "etcd-kindnet-945825" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:05:45.738326 1126792 pod_ready.go:94] pod "etcd-kindnet-945825" is "Ready"
	I1210 08:05:45.738354 1126792 pod_ready.go:86] duration metric: took 4.965589ms for pod "etcd-kindnet-945825" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:05:45.741011 1126792 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-945825" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:05:45.745943 1126792 pod_ready.go:94] pod "kube-apiserver-kindnet-945825" is "Ready"
	I1210 08:05:45.745972 1126792 pod_ready.go:86] duration metric: took 4.932398ms for pod "kube-apiserver-kindnet-945825" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:05:45.748334 1126792 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-945825" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:05:46.126927 1126792 pod_ready.go:94] pod "kube-controller-manager-kindnet-945825" is "Ready"
	I1210 08:05:46.126955 1126792 pod_ready.go:86] duration metric: took 378.588596ms for pod "kube-controller-manager-kindnet-945825" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:05:46.327560 1126792 pod_ready.go:83] waiting for pod "kube-proxy-smnb9" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:05:46.726733 1126792 pod_ready.go:94] pod "kube-proxy-smnb9" is "Ready"
	I1210 08:05:46.726759 1126792 pod_ready.go:86] duration metric: took 399.122015ms for pod "kube-proxy-smnb9" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:05:46.927259 1126792 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-945825" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:05:47.327265 1126792 pod_ready.go:94] pod "kube-scheduler-kindnet-945825" is "Ready"
	I1210 08:05:47.327295 1126792 pod_ready.go:86] duration metric: took 400.007852ms for pod "kube-scheduler-kindnet-945825" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:05:47.327309 1126792 pod_ready.go:40] duration metric: took 1.605535065s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 08:05:47.381735 1126792 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1210 08:05:47.386923 1126792 out.go:179] * Done! kubectl is now configured to use "kindnet-945825" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.820886372Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.820897753Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.820941675Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.820957323Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.820967374Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.820979354Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.820991735Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.821002452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.821025221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.821069053Z" level=info msg="Connect containerd service"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.821339826Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.821931810Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.835633697Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.835889266Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.835806303Z" level=info msg="Start subscribing containerd event"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.838543186Z" level=info msg="Start recovering state"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.862645834Z" level=info msg="Start event monitor"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.862821648Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.862884336Z" level=info msg="Start streaming server"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.862946598Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.863002574Z" level=info msg="runtime interface starting up..."
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.863060848Z" level=info msg="starting plugins..."
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.863142670Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 07:51:16 no-preload-587009 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.866796941Z" level=info msg="containerd successfully booted in 0.072064s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 08:06:26.065616    8144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 08:06:26.066127    8144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 08:06:26.074459    8144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 08:06:26.075003    8144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 08:06:26.078371    8144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 08:06:26 up  6:48,  0 user,  load average: 1.64, 1.56, 1.47
	Linux no-preload-587009 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 08:06:23 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:06:23 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1208.
	Dec 10 08:06:23 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:06:23 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:06:23 no-preload-587009 kubelet[8014]: E1210 08:06:23.848758    8014 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:06:23 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:06:23 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:06:24 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1209.
	Dec 10 08:06:24 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:06:24 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:06:24 no-preload-587009 kubelet[8031]: E1210 08:06:24.619566    8031 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:06:24 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:06:24 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:06:25 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1210.
	Dec 10 08:06:25 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:06:25 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:06:25 no-preload-587009 kubelet[8054]: E1210 08:06:25.375053    8054 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:06:25 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:06:25 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:06:26 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1211.
	Dec 10 08:06:26 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:06:26 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:06:26 no-preload-587009 kubelet[8148]: E1210 08:06:26.156506    8148 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:06:26 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:06:26 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
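The restart loop at the end of the log dump above (counters 1208-1211) repeats a single validation error: this kubelet build refuses to run on a cgroup v1 host, and the kicbase container inherits the host's cgroup mode, so the node can never come up. A minimal Go sketch for checking which cgroup mode a host exposes (assumes golang.org/x/sys/unix is available; an illustrative probe, not part of the test suite):

	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	func main() {
		// Statfs on the cgroup mount point distinguishes the unified v2
		// hierarchy (cgroup2fs) from the legacy v1 layout (tmpfs).
		var st unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
			panic(err)
		}
		if st.Type == unix.CGROUP2_SUPER_MAGIC {
			fmt.Println("cgroup v2: kubelet v1.35+ can start")
		} else {
			fmt.Println("cgroup v1: kubelet v1.35+ fails validation, as logged above")
		}
	}

On this Ubuntu 20.04 host with kernel 5.15.0-1084-aws the check would land in the v1 branch, matching the kubelet failures above.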
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-587009 -n no-preload-587009
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-587009 -n no-preload-587009: exit status 2 (434.275342ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-587009" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.49s)
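Both symptoms in this failure reduce to the same thing: the `kubectl describe nodes` stderr shows connection refused on localhost:8443, and the final status probe reports the apiserver as Stopped, because without a running kubelet the static apiserver pod is never launched. A quick reachability probe in Go (endpoint copied from the errors above; a diagnostic sketch, not test code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the apiserver endpoint that kubectl failed to reach.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// Expected here: "connect: connection refused", since the kubelet
			// never comes up to launch the static apiserver pod.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}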

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-237317 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-237317 -n newest-cni-237317
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-237317 -n newest-cni-237317: exit status 2 (329.267977ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-237317 -n newest-cni-237317
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-237317 -n newest-cni-237317: exit status 2 (299.888852ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-237317 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-237317 -n newest-cni-237317
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-237317 -n newest-cni-237317: exit status 2 (299.146187ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-237317 -n newest-cni-237317
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-237317 -n newest-cni-237317: exit status 2 (321.852687ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
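The three "Stopped; want ..." mismatches above come from probing minikube's status fields with Go templates after each pause/unpause step. A rough equivalent of that probe (binary path and profile name copied from the commands above, so it assumes the test workspace as the working directory; non-zero exits are tolerated the same way the "may be ok" lines do):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	// statusField runs `minikube status --format={{.Field}}` and returns the
	// printed value. Non-zero exits are tolerated, mirroring the
	// "status error: exit status 2 (may be ok)" lines above.
	func statusField(field, profile string) (string, error) {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{."+field+"}}", "-p", profile, "-n", profile)
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if err != nil && !errors.As(err, &exitErr) {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		for _, f := range []string{"APIServer", "Kubelet"} {
			v, err := statusField(f, "newest-cni-237317")
			if err != nil {
				panic(err)
			}
			fmt.Printf("%s=%q\n", f, v) // the run above saw "Stopped" for both
		}
	}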
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-237317
helpers_test.go:244: (dbg) docker inspect newest-cni-237317:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d",
	        "Created": "2025-12-10T07:41:27.764165056Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1078597,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:51:14.851297935Z",
	            "FinishedAt": "2025-12-10T07:51:13.296430701Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d/hostname",
	        "HostsPath": "/var/lib/docker/containers/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d/hosts",
	        "LogPath": "/var/lib/docker/containers/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d-json.log",
	        "Name": "/newest-cni-237317",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-237317:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-237317",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d",
	                "LowerDir": "/var/lib/docker/overlay2/eb95a77a82f1fcb16098f073f23236757bf5560cf9fb37f652c127fb3ef2dbb4-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eb95a77a82f1fcb16098f073f23236757bf5560cf9fb37f652c127fb3ef2dbb4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eb95a77a82f1fcb16098f073f23236757bf5560cf9fb37f652c127fb3ef2dbb4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eb95a77a82f1fcb16098f073f23236757bf5560cf9fb37f652c127fb3ef2dbb4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-237317",
	                "Source": "/var/lib/docker/volumes/newest-cni-237317/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-237317",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-237317",
	                "name.minikube.sigs.k8s.io": "newest-cni-237317",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1ce3a28f31774fef443c63794bb8a81b083cde3dd4d8dbf17e6f4c44906e905a",
	            "SandboxKey": "/var/run/docker/netns/1ce3a28f3177",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33845"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33849"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33847"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33848"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-237317": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:6f:71:0d:8d:a7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8181aebce826300f2c9eb8f48208470a68f1816a212863fa9c220fbbaa29953b",
	                    "EndpointID": "c0800f293b750ff5d10633caea6a666c9ca543920cb52ef2db3d40a6e4851b98",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-237317",
	                        "a3bfe8c2955a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
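The inspect output shows the Docker layer is healthy: State.Running is true and State.Paused is false, so the pause/unpause failure sits inside the node rather than with the container itself. A small sketch that extracts just those two fields (profile name taken from this report; illustrative only):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		// `docker inspect` prints a JSON array; decode only the two State
		// fields relevant to the pause check. Panics if the container is gone.
		out, err := exec.Command("docker", "inspect", "newest-cni-237317").Output()
		if err != nil {
			panic(err)
		}
		var info []struct {
			State struct {
				Status string
				Paused bool
			}
		}
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("status=%s paused=%v\n", info[0].State.Status, info[0].State.Paused)
	}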
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-237317 -n newest-cni-237317
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-237317 -n newest-cni-237317: exit status 2 (311.809441ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-237317 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-237317 logs -n 25: (1.923079836s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-254586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:41 UTC │
	│ image   │ default-k8s-diff-port-444518 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ pause   │ -p default-k8s-diff-port-444518 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ unpause │ -p default-k8s-diff-port-444518 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p default-k8s-diff-port-444518                                                                                                                                                                                                                            │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p default-k8s-diff-port-444518                                                                                                                                                                                                                            │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p disable-driver-mounts-262664                                                                                                                                                                                                                            │ disable-driver-mounts-262664 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ start   │ -p no-preload-587009 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │                     │
	│ image   │ embed-certs-254586 image list --format=json                                                                                                                                                                                                                │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ pause   │ -p embed-certs-254586 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ unpause │ -p embed-certs-254586 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ delete  │ -p embed-certs-254586                                                                                                                                                                                                                                      │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ delete  │ -p embed-certs-254586                                                                                                                                                                                                                                      │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ start   │ -p newest-cni-237317 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-587009 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:49 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-237317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:49 UTC │                     │
	│ stop    │ -p no-preload-587009 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │ 10 Dec 25 07:51 UTC │
	│ addons  │ enable dashboard -p no-preload-587009 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │ 10 Dec 25 07:51 UTC │
	│ start   │ -p no-preload-587009 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │                     │
	│ stop    │ -p newest-cni-237317 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │ 10 Dec 25 07:51 UTC │
	│ addons  │ enable dashboard -p newest-cni-237317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │ 10 Dec 25 07:51 UTC │
	│ start   │ -p newest-cni-237317 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │                     │
	│ image   │ newest-cni-237317 image list --format=json                                                                                                                                                                                                                 │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:57 UTC │ 10 Dec 25 07:57 UTC │
	│ pause   │ -p newest-cni-237317 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:57 UTC │ 10 Dec 25 07:57 UTC │
	│ unpause │ -p newest-cni-237317 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:57 UTC │ 10 Dec 25 07:57 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:51:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:51:14.495415 1078428 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:51:14.495519 1078428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:51:14.495524 1078428 out.go:374] Setting ErrFile to fd 2...
	I1210 07:51:14.495529 1078428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:51:14.495772 1078428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 07:51:14.496198 1078428 out.go:368] Setting JSON to false
	I1210 07:51:14.497022 1078428 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23599,"bootTime":1765329476,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 07:51:14.497081 1078428 start.go:143] virtualization:  
	I1210 07:51:14.500489 1078428 out.go:179] * [newest-cni-237317] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:51:14.503586 1078428 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:51:14.503671 1078428 notify.go:221] Checking for updates...
	I1210 07:51:14.509469 1078428 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:51:14.512370 1078428 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:14.515169 1078428 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 07:51:14.518012 1078428 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:51:14.520797 1078428 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:51:14.527169 1078428 config.go:182] Loaded profile config "newest-cni-237317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:14.527731 1078428 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:51:14.566042 1078428 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:51:14.566172 1078428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:51:14.628663 1078428 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-10 07:51:14.618086592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:51:14.628767 1078428 docker.go:319] overlay module found
	I1210 07:51:14.631981 1078428 out.go:179] * Using the docker driver based on existing profile
	I1210 07:51:14.634809 1078428 start.go:309] selected driver: docker
	I1210 07:51:14.634833 1078428 start.go:927] validating driver "docker" against &{Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:51:14.634946 1078428 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:51:14.635637 1078428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:51:14.728404 1078428 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-10 07:51:14.713293715 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:51:14.728788 1078428 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 07:51:14.728810 1078428 cni.go:84] Creating CNI manager for ""
	I1210 07:51:14.728854 1078428 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:51:14.728892 1078428 start.go:353] cluster config:
	{Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:51:14.732274 1078428 out.go:179] * Starting "newest-cni-237317" primary control-plane node in "newest-cni-237317" cluster
	I1210 07:51:14.735049 1078428 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:51:14.738088 1078428 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:51:14.740969 1078428 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:51:14.741011 1078428 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 07:51:14.741020 1078428 cache.go:65] Caching tarball of preloaded images
	I1210 07:51:14.741100 1078428 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 07:51:14.741110 1078428 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1210 07:51:14.741232 1078428 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json ...
	I1210 07:51:14.741437 1078428 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:51:14.763634 1078428 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:51:14.763653 1078428 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:51:14.763668 1078428 cache.go:243] Successfully downloaded all kic artifacts
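	The two image.go/cache.go lines above short-circuit the pull because the kic base image is already loaded in the daemon. The same presence check done by hand, a hedged sketch using the image tag from this run:

	    # Hypothetical: a non-zero exit means the kic base image is absent from the daemon.
	    docker image inspect --format '{{.Id}}' \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089 >/dev/null && echo present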
	I1210 07:51:14.763698 1078428 start.go:360] acquireMachinesLock for newest-cni-237317: {Name:mk865fae7594bb364b28b787041666ed4ecb9dd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:51:14.763755 1078428 start.go:364] duration metric: took 40.304µs to acquireMachinesLock for "newest-cni-237317"
	I1210 07:51:14.763774 1078428 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:51:14.763779 1078428 fix.go:54] fixHost starting: 
	I1210 07:51:14.764055 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:14.807148 1078428 fix.go:112] recreateIfNeeded on newest-cni-237317: state=Stopped err=<nil>
	W1210 07:51:14.807188 1078428 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:51:10.742298 1077343 out.go:252] * Restarting existing docker container for "no-preload-587009" ...
	I1210 07:51:10.742407 1077343 cli_runner.go:164] Run: docker start no-preload-587009
	I1210 07:51:11.039727 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:11.064793 1077343 kic.go:430] container "no-preload-587009" state is running.
	I1210 07:51:11.065794 1077343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-587009
	I1210 07:51:11.090953 1077343 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/config.json ...
	I1210 07:51:11.091180 1077343 machine.go:94] provisionDockerMachine start ...
	I1210 07:51:11.091248 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:11.118540 1077343 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:11.118875 1077343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33840 <nil> <nil>}
	I1210 07:51:11.118891 1077343 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:51:11.119530 1077343 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38028->127.0.0.1:33840: read: connection reset by peer
	I1210 07:51:14.269979 1077343 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-587009
	
	I1210 07:51:14.270011 1077343 ubuntu.go:182] provisioning hostname "no-preload-587009"
	I1210 07:51:14.270115 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:14.295536 1077343 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:14.295890 1077343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33840 <nil> <nil>}
	I1210 07:51:14.295901 1077343 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-587009 && echo "no-preload-587009" | sudo tee /etc/hostname
	I1210 07:51:14.452920 1077343 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-587009
	
	I1210 07:51:14.453011 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:14.478828 1077343 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:14.479134 1077343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33840 <nil> <nil>}
	I1210 07:51:14.479150 1077343 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-587009' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-587009/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-587009' | sudo tee -a /etc/hosts; 
				fi
			fi
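	The script above idempotently pins 127.0.1.1 to the new hostname: it rewrites an existing 127.0.1.1 entry in place, or appends one if none exists. A quick hedged check of the result over the same forwarded SSH port (33840 in this run), using the key path this log uses later:

	    # Hypothetical verification; port and key path are taken from this log.
	    ssh -p 33840 -i /home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa \
	        docker@127.0.0.1 'hostname; grep 127.0.1.1 /etc/hosts'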
	I1210 07:51:14.626210 1077343 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:51:14.626250 1077343 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 07:51:14.626279 1077343 ubuntu.go:190] setting up certificates
	I1210 07:51:14.626296 1077343 provision.go:84] configureAuth start
	I1210 07:51:14.626367 1077343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-587009
	I1210 07:51:14.653396 1077343 provision.go:143] copyHostCerts
	I1210 07:51:14.653479 1077343 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 07:51:14.653501 1077343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 07:51:14.653585 1077343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 07:51:14.653695 1077343 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 07:51:14.653708 1077343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 07:51:14.653739 1077343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 07:51:14.653813 1077343 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 07:51:14.653823 1077343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 07:51:14.653849 1077343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 07:51:14.653913 1077343 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.no-preload-587009 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-587009]
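	The san=[...] list above becomes the Subject Alternative Names of the generated server.pem. A hedged way to confirm they landed, using the output path from this log:

	    # Hypothetical SAN inspection of the freshly generated server certificate.
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'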
	I1210 07:51:14.987883 1077343 provision.go:177] copyRemoteCerts
	I1210 07:51:14.987956 1077343 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:51:14.988006 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.016190 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.122129 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:51:15.168648 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:51:15.209293 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:51:15.238881 1077343 provision.go:87] duration metric: took 612.568009ms to configureAuth
	I1210 07:51:15.238905 1077343 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:51:15.239106 1077343 config.go:182] Loaded profile config "no-preload-587009": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:15.239113 1077343 machine.go:97] duration metric: took 4.147925818s to provisionDockerMachine
	I1210 07:51:15.239121 1077343 start.go:293] postStartSetup for "no-preload-587009" (driver="docker")
	I1210 07:51:15.239133 1077343 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:51:15.239186 1077343 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:51:15.239227 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.259116 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.370554 1077343 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:51:15.375386 1077343 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:51:15.375413 1077343 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:51:15.375424 1077343 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 07:51:15.375477 1077343 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 07:51:15.375560 1077343 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 07:51:15.375669 1077343 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:51:15.386817 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:51:15.415888 1077343 start.go:296] duration metric: took 176.733864ms for postStartSetup
	I1210 07:51:15.416018 1077343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:51:15.416065 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.439058 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.548495 1077343 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:51:15.553596 1077343 fix.go:56] duration metric: took 4.831668845s for fixHost
	I1210 07:51:15.553633 1077343 start.go:83] releasing machines lock for "no-preload-587009", held for 4.831730515s
	I1210 07:51:15.553722 1077343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-587009
	I1210 07:51:15.586973 1077343 ssh_runner.go:195] Run: cat /version.json
	I1210 07:51:15.587034 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.587329 1077343 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:51:15.587396 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.629146 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.634697 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.746290 1077343 ssh_runner.go:195] Run: systemctl --version
	I1210 07:51:15.838801 1077343 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:51:15.843040 1077343 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:51:15.843111 1077343 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:51:15.851174 1077343 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:51:15.851245 1077343 start.go:496] detecting cgroup driver to use...
	I1210 07:51:15.851294 1077343 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:51:15.851351 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:51:15.869860 1077343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:51:15.883702 1077343 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:51:15.883777 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:51:15.899664 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:51:15.913011 1077343 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:51:16.034801 1077343 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:51:16.150617 1077343 docker.go:234] disabling docker service ...
	I1210 07:51:16.150759 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:51:16.165840 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:51:16.180309 1077343 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:51:16.307789 1077343 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:51:16.432072 1077343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:51:16.444962 1077343 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:51:16.459040 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:51:16.467874 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:51:16.476775 1077343 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:51:16.476842 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:51:16.485489 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:51:16.494113 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:51:16.502936 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:51:16.511763 1077343 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:51:16.519893 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:51:16.528779 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:51:16.537342 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:51:16.546138 1077343 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:51:16.553912 1077343 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:51:16.561714 1077343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:16.748597 1077343 ssh_runner.go:195] Run: sudo systemctl restart containerd
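	The block of sed edits above rewrites /etc/containerd/config.toml in place (pause image, cgroupfs driver, runc v2 runtime, CNI conf_dir, unprivileged ports) before this restart. A hedged spot-check, run inside the node, that the edits took effect:

	    # Hypothetical verification of the sed-driven containerd reconfiguration.
	    grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	    systemctl is-active containerd   # expected to print "active" after the restart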
	I1210 07:51:16.865266 1077343 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:51:16.865408 1077343 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:51:16.869450 1077343 start.go:564] Will wait 60s for crictl version
	I1210 07:51:16.869562 1077343 ssh_runner.go:195] Run: which crictl
	I1210 07:51:16.873018 1077343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:51:16.900099 1077343 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:51:16.900218 1077343 ssh_runner.go:195] Run: containerd --version
	I1210 07:51:16.923700 1077343 ssh_runner.go:195] Run: containerd --version
	I1210 07:51:16.947379 1077343 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 07:51:16.950227 1077343 cli_runner.go:164] Run: docker network inspect no-preload-587009 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
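	The Go template above flattens the profile network into a single JSON object. A hedged, simpler probe of the same network that reports just the subnet and gateway:

	    # Hypothetical: minimal view of the no-preload-587009 network.
	    docker network inspect no-preload-587009 \
	      --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}} gw={{.Gateway}}{{end}}'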
	I1210 07:51:16.965229 1077343 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 07:51:16.969175 1077343 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:51:16.978619 1077343 kubeadm.go:884] updating cluster {Name:no-preload-587009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:51:16.978743 1077343 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:51:16.978798 1077343 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:51:17.014301 1077343 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:51:17.014333 1077343 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:51:17.014341 1077343 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1210 07:51:17.014532 1077343 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-587009 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
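	The drop-in above replaces the kubelet ExecStart with the version-pinned binary and the node IP; it only takes effect after the daemon-reload logged further down. A hedged check that the override is live, assuming the unit is named kubelet as in this log:

	    # Hypothetical: confirm the rendered unit carries the expected flag.
	    systemctl cat kubelet | grep -- --node-ip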
	I1210 07:51:17.014625 1077343 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:51:17.044039 1077343 cni.go:84] Creating CNI manager for ""
	I1210 07:51:17.044060 1077343 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:51:17.044082 1077343 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:51:17.044104 1077343 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-587009 NodeName:no-preload-587009 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:51:17.044222 1077343 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-587009"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
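	The four stacked documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets shipped to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A hedged way to exercise the file without mutating the node, assuming the pinned kubeadm binary path from this log:

	    # Hypothetical dry run: parses and renders the config, applies nothing.
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run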
	
	I1210 07:51:17.044289 1077343 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:51:17.052024 1077343 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:51:17.052101 1077343 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:51:17.059722 1077343 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 07:51:17.072494 1077343 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:51:17.086253 1077343 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1210 07:51:17.099376 1077343 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:51:17.102883 1077343 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
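	This is the second use of the same grep/echo/cp pattern in this log (host.minikube.internal earlier, control-plane.minikube.internal here): strip any old entry for the name, append the fresh one, then copy the temp file back over /etc/hosts. A hedged generic form of the pattern:

	    # Hypothetical helper mirroring the host-pinning pattern used twice above.
	    pin_host() {  # usage: pin_host IP NAME
	      { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
	      sudo cp "/tmp/h.$$" /etc/hosts
	    }
	    pin_host 192.168.85.2 control-plane.minikube.internal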
	I1210 07:51:17.112330 1077343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:17.225530 1077343 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:51:17.246996 1077343 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009 for IP: 192.168.85.2
	I1210 07:51:17.247021 1077343 certs.go:195] generating shared ca certs ...
	I1210 07:51:17.247038 1077343 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:17.247186 1077343 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 07:51:17.247238 1077343 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 07:51:17.247248 1077343 certs.go:257] generating profile certs ...
	I1210 07:51:17.247347 1077343 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/client.key
	I1210 07:51:17.247407 1077343 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.key.841ee17a
	I1210 07:51:17.247454 1077343 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/proxy-client.key
	I1210 07:51:17.247566 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 07:51:17.247604 1077343 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 07:51:17.247617 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:51:17.247646 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:51:17.247674 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:51:17.247712 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 07:51:17.247768 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:51:17.248384 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:51:17.265969 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:51:17.284190 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:51:17.302881 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:51:17.324073 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:51:17.341990 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 07:51:17.359614 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:51:17.377843 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:51:17.395426 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 07:51:17.413039 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:51:17.430522 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 07:51:17.447821 1077343 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:51:17.460777 1077343 ssh_runner.go:195] Run: openssl version
	I1210 07:51:17.467243 1077343 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 07:51:17.474706 1077343 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 07:51:17.482273 1077343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 07:51:17.485950 1077343 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 07:51:17.486025 1077343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 07:51:17.526902 1077343 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:51:17.534224 1077343 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 07:51:17.541448 1077343 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 07:51:17.549037 1077343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 07:51:17.552765 1077343 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 07:51:17.552832 1077343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 07:51:17.595755 1077343 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:51:17.603128 1077343 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:17.610926 1077343 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:51:17.618981 1077343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:17.622497 1077343 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:17.622563 1077343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:17.663609 1077343 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
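	Each openssl x509 -hash / ln -fs / test -L triple above installs a CA under its subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0 here) so OpenSSL's hash-based lookup can find it. A hedged rendering of one round trip:

	    # Hypothetical: derive the hash name and install the symlink, as the log does.
	    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$H.0"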
	I1210 07:51:17.670957 1077343 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:51:17.674676 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:51:17.715746 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:51:17.758195 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:51:17.799081 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:51:17.840047 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:51:17.880964 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
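	The -checkend 86400 runs above exit non-zero if a certificate expires within the next 24 hours, which is how minikube decides whether regeneration is needed. A hedged sweep over two of the same control-plane certs:

	    # Hypothetical: flag any checked cert that expires within a day.
	    for c in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	             /var/lib/minikube/certs/front-proxy-client.crt; do
	      openssl x509 -noout -in "$c" -checkend 86400 || echo "EXPIRING: $c"
	    done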
	I1210 07:51:17.921878 1077343 kubeadm.go:401] StartCluster: {Name:no-preload-587009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:51:17.921988 1077343 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:51:17.922092 1077343 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:51:17.951649 1077343 cri.go:89] found id: ""
	I1210 07:51:17.951796 1077343 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:51:17.959534 1077343 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:51:17.959555 1077343 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:51:17.959635 1077343 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:51:17.966920 1077343 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:51:17.967331 1077343 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-587009" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:17.967425 1077343 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-784887/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-587009" cluster setting kubeconfig missing "no-preload-587009" context setting]
	I1210 07:51:17.967687 1077343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:17.968903 1077343 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:51:17.977669 1077343 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1210 07:51:17.977707 1077343 kubeadm.go:602] duration metric: took 18.146766ms to restartPrimaryControlPlane
	I1210 07:51:17.977718 1077343 kubeadm.go:403] duration metric: took 55.849318ms to StartCluster
	I1210 07:51:17.977733 1077343 settings.go:142] acquiring lock: {Name:mkea9bfe0055ec090e4ce10a287599541b65b38e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:17.977796 1077343 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:17.978427 1077343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:17.978652 1077343 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:51:17.978958 1077343 config.go:182] Loaded profile config "no-preload-587009": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:17.979006 1077343 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:51:17.979072 1077343 addons.go:70] Setting storage-provisioner=true in profile "no-preload-587009"
	I1210 07:51:17.979085 1077343 addons.go:239] Setting addon storage-provisioner=true in "no-preload-587009"
	I1210 07:51:17.979106 1077343 host.go:66] Checking if "no-preload-587009" exists ...
	I1210 07:51:17.979123 1077343 addons.go:70] Setting dashboard=true in profile "no-preload-587009"
	I1210 07:51:17.979139 1077343 addons.go:239] Setting addon dashboard=true in "no-preload-587009"
	W1210 07:51:17.979155 1077343 addons.go:248] addon dashboard should already be in state true
	I1210 07:51:17.979179 1077343 host.go:66] Checking if "no-preload-587009" exists ...
	I1210 07:51:17.979564 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:17.979606 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:17.982091 1077343 addons.go:70] Setting default-storageclass=true in profile "no-preload-587009"
	I1210 07:51:17.982247 1077343 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-587009"
	I1210 07:51:17.983173 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:17.984528 1077343 out.go:179] * Verifying Kubernetes components...
	I1210 07:51:17.987357 1077343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:18.030694 1077343 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:51:18.030828 1077343 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 07:51:18.034622 1077343 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 07:51:14.810511 1078428 out.go:252] * Restarting existing docker container for "newest-cni-237317" ...
	I1210 07:51:14.810602 1078428 cli_runner.go:164] Run: docker start newest-cni-237317
	I1210 07:51:15.140257 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:15.163514 1078428 kic.go:430] container "newest-cni-237317" state is running.
	I1210 07:51:15.165120 1078428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:51:15.200178 1078428 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json ...
	I1210 07:51:15.200425 1078428 machine.go:94] provisionDockerMachine start ...
	I1210 07:51:15.200484 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:15.234652 1078428 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:15.234972 1078428 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33845 <nil> <nil>}
	I1210 07:51:15.234980 1078428 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:51:15.238112 1078428 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 07:51:18.394621 1078428 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-237317
	
	I1210 07:51:18.394726 1078428 ubuntu.go:182] provisioning hostname "newest-cni-237317"
	I1210 07:51:18.394818 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:18.424081 1078428 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:18.424400 1078428 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33845 <nil> <nil>}
	I1210 07:51:18.424411 1078428 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-237317 && echo "newest-cni-237317" | sudo tee /etc/hostname
	I1210 07:51:18.589360 1078428 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-237317
	
	I1210 07:51:18.589454 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:18.613196 1078428 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:18.613511 1078428 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33845 <nil> <nil>}
	I1210 07:51:18.613536 1078428 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-237317' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-237317/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-237317' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:51:18.750663 1078428 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:51:18.750693 1078428 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 07:51:18.750726 1078428 ubuntu.go:190] setting up certificates
	I1210 07:51:18.750745 1078428 provision.go:84] configureAuth start
	I1210 07:51:18.750808 1078428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:51:18.768151 1078428 provision.go:143] copyHostCerts
	I1210 07:51:18.768234 1078428 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 07:51:18.768250 1078428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 07:51:18.768328 1078428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 07:51:18.768450 1078428 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 07:51:18.768462 1078428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 07:51:18.768492 1078428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 07:51:18.768566 1078428 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 07:51:18.768583 1078428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 07:51:18.768617 1078428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 07:51:18.768682 1078428 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.newest-cni-237317 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-237317]
	I1210 07:51:19.084729 1078428 provision.go:177] copyRemoteCerts
	I1210 07:51:19.084804 1078428 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:51:19.084849 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.104109 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:19.203019 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:51:19.223435 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:51:19.240802 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:51:19.257611 1078428 provision.go:87] duration metric: took 506.840522ms to configureAuth
	I1210 07:51:19.257643 1078428 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:51:19.257850 1078428 config.go:182] Loaded profile config "newest-cni-237317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:19.257864 1078428 machine.go:97] duration metric: took 4.057430572s to provisionDockerMachine
	I1210 07:51:19.257873 1078428 start.go:293] postStartSetup for "newest-cni-237317" (driver="docker")
	I1210 07:51:19.257887 1078428 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:51:19.257947 1078428 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:51:19.257992 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.274867 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:19.371336 1078428 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:51:19.375463 1078428 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:51:19.375497 1078428 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:51:19.375509 1078428 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 07:51:19.375559 1078428 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 07:51:19.375641 1078428 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 07:51:19.375745 1078428 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:51:19.386080 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:51:19.406230 1078428 start.go:296] duration metric: took 148.339109ms for postStartSetup
	I1210 07:51:19.406314 1078428 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:51:19.406379 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.424523 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:18.034780 1077343 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:18.034793 1077343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:51:18.034874 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:18.037543 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 07:51:18.037568 1077343 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 07:51:18.037639 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:18.041604 1077343 addons.go:239] Setting addon default-storageclass=true in "no-preload-587009"
	I1210 07:51:18.041645 1077343 host.go:66] Checking if "no-preload-587009" exists ...
	I1210 07:51:18.042060 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:18.105147 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:18.114730 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:18.115497 1077343 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:18.115511 1077343 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:51:18.115563 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:18.135449 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:18.230094 1077343 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:51:18.264441 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:18.283658 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 07:51:18.283729 1077343 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 07:51:18.329062 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 07:51:18.329133 1077343 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 07:51:18.353549 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 07:51:18.353629 1077343 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 07:51:18.357622 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:18.376127 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 07:51:18.376202 1077343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 07:51:18.447999 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 07:51:18.448021 1077343 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 07:51:18.470186 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 07:51:18.470208 1077343 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 07:51:18.489233 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 07:51:18.489255 1077343 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 07:51:18.503805 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 07:51:18.503828 1077343 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 07:51:18.521545 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:18.521566 1077343 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 07:51:18.536611 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:19.053453 1077343 node_ready.go:35] waiting up to 6m0s for node "no-preload-587009" to be "Ready" ...
	W1210 07:51:19.053800 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.053834 1077343 retry.go:31] will retry after 261.467752ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:19.053883 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.053894 1077343 retry.go:31] will retry after 368.94912ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:19.054089 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.054104 1077343 retry.go:31] will retry after 338.426434ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.315446 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:19.382015 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.382044 1077343 retry.go:31] will retry after 337.060159ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.393358 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:19.424101 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:19.491743 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.491780 1077343 retry.go:31] will retry after 471.881278ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:19.538786 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.538838 1077343 retry.go:31] will retry after 528.879721ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.719721 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:19.790713 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.790742 1077343 retry.go:31] will retry after 510.29035ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.964160 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:20.068233 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:20.070746 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.070792 1077343 retry.go:31] will retry after 543.265245ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:20.148457 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.148492 1077343 retry.go:31] will retry after 460.630823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.301882 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:20.397427 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.397476 1077343 retry.go:31] will retry after 801.303312ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
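
The loop driving the W/I pairs above is a plain apply-with-backoff: each failed `kubectl apply` is logged, then retried after a short delay that grows with jitter until the apiserver on localhost:8443 starts answering. A minimal sketch of that pattern, assuming a placeholder apply function rather than minikube's real SSH-backed runner:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// apply stands in for one "kubectl apply -f ..." invocation over SSH.
// It is a placeholder for this sketch, not minikube's real helper.
func apply(manifest string) error {
	return fmt.Errorf("connection refused") // apiserver not up yet
}

// retryApply retries apply with jittered, roughly doubling delays,
// mirroring the "will retry after ..." lines in the log above.
func retryApply(manifest string, attempts int) error {
	delay := 250 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(manifest); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return err
}

func main() {
	_ = retryApply("/etc/kubernetes/addons/storage-provisioner.yaml", 5)
}

The jitter keeps the concurrently retried addon applies (storage-provisioner, storageclass, dashboard) from hitting the apiserver in lockstep, which is why the delays above (261ms, 368ms, 338ms, ...) never quite match.
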
	I1210 07:51:19.524843 1078428 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:51:19.530920 1078428 fix.go:56] duration metric: took 4.767134196s for fixHost
	I1210 07:51:19.530943 1078428 start.go:83] releasing machines lock for "newest-cni-237317", held for 4.767180038s
	I1210 07:51:19.531010 1078428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:51:19.550838 1078428 ssh_runner.go:195] Run: cat /version.json
	I1210 07:51:19.550877 1078428 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:51:19.550890 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.550934 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.570871 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:19.573219 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:19.666233 1078428 ssh_runner.go:195] Run: systemctl --version
	I1210 07:51:19.757488 1078428 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:51:19.762554 1078428 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:51:19.762646 1078428 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:51:19.772614 1078428 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
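
Rather than deleting conflicting CNI configs, minikube sidelines them with a rename; the `find ... -exec mv {} {}.mk_disabled` line above is the shell form. A rough local equivalent as a sketch, assuming plain substring matching on "bridge"/"podman" as the find expression implies:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames bridge/podman CNI configs so the runtime
// ignores them, following the ".mk_disabled" convention in the log.
func disableBridgeCNI(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", src)
		}
	}
	return nil
}

func main() {
	_ = disableBridgeCNI("/etc/cni/net.d")
}

Renaming instead of removing makes the step reversible, which matters when the same node is restarted under a different CNI choice.
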
	I1210 07:51:19.772688 1078428 start.go:496] detecting cgroup driver to use...
	I1210 07:51:19.772735 1078428 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:51:19.772810 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:51:19.790830 1078428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:51:19.808563 1078428 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:51:19.808685 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:51:19.825219 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:51:19.839550 1078428 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:51:19.957848 1078428 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:51:20.106011 1078428 docker.go:234] disabling docker service ...
	I1210 07:51:20.106089 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:51:20.124597 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:51:20.139030 1078428 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:51:20.264730 1078428 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:51:20.405057 1078428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
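
Only one runtime may own the CRI socket, so before reconfiguring containerd the log stops and masks cri-docker and docker, tolerating failures for units that are simply absent. A hedged sketch of that quiescing step, run locally instead of over SSH:

package main

import (
	"fmt"
	"os/exec"
)

// quiesceUnit stops, disables, and masks a systemd unit, ignoring
// errors for units that do not exist on this host -- mirroring how
// the log's systemctl calls tolerate missing docker/cri-docker.
func quiesceUnit(unit string) {
	for _, args := range [][]string{
		{"systemctl", "stop", "-f", unit},
		{"systemctl", "disable", unit},
		{"systemctl", "mask", unit},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("%v: %v (%s)\n", args, err, out)
		}
	}
}

func main() {
	for _, u := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
		quiesceUnit(u)
	}
}
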
	I1210 07:51:20.418041 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:51:20.434060 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:51:20.443707 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:51:20.453162 1078428 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:51:20.453287 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:51:20.462485 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:51:20.471477 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:51:20.480685 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:51:20.489771 1078428 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:51:20.498259 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:51:20.507883 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:51:20.516803 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:51:20.525782 1078428 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:51:20.533254 1078428 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:51:20.540718 1078428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:20.693669 1078428 ssh_runner.go:195] Run: sudo systemctl restart containerd
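
The run of sed commands above rewrites /etc/containerd/config.toml in place: cgroupfs instead of systemd cgroups, a pinned pause image, the runc v2 shim, a fixed CNI conf_dir. The same line-level rewrites can be expressed with Go regexps; this sketch covers only the two most consequential rules and assumes the stock config layout:

package main

import (
	"os"
	"regexp"
)

// patchContainerdConfig applies the same edits as the first two sed
// invocations in the log: force SystemdCgroup = false (minikube
// detected "cgroupfs" on the host) and pin the sandbox image.
func patchContainerdConfig(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	rules := []struct{ re, repl string }{
		{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
		{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`},
	}
	for _, r := range rules {
		data = regexp.MustCompile(r.re).ReplaceAll(data, []byte(r.repl))
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	_ = patchContainerdConfig("/etc/containerd/config.toml")
}

A `systemctl daemon-reload` followed by `systemctl restart containerd`, as in the log, is what actually picks the edits up.
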
	I1210 07:51:20.831153 1078428 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:51:20.831249 1078428 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:51:20.835049 1078428 start.go:564] Will wait 60s for crictl version
	I1210 07:51:20.835127 1078428 ssh_runner.go:195] Run: which crictl
	I1210 07:51:20.838628 1078428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:51:20.863125 1078428 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
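
After restarting containerd, minikube does not trust the unit state alone: it polls the socket path and then probes crictl ("Will wait 60s for socket path ... Will wait 60s for crictl version"). A minimal poll for the socket, assuming a fixed 500ms interval:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, as in
// the "Will wait 60s for socket path" step above. Sketch only.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
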
	I1210 07:51:20.863217 1078428 ssh_runner.go:195] Run: containerd --version
	I1210 07:51:20.884709 1078428 ssh_runner.go:195] Run: containerd --version
	I1210 07:51:20.910533 1078428 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 07:51:20.913646 1078428 cli_runner.go:164] Run: docker network inspect newest-cni-237317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:51:20.930416 1078428 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 07:51:20.934716 1078428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
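
The bash one-liner above makes the host.minikube.internal mapping idempotent: strip any existing line ending in the name, append a fresh "IP<TAB>name" entry, and copy the result back over /etc/hosts. The same edit in Go, as a sketch:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops stale lines for name and appends a fresh
// tab-separated entry, matching the grep -v / echo pipeline above.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}

The same routine runs again a few lines later for control-plane.minikube.internal at 192.168.76.2.
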
	I1210 07:51:20.948181 1078428 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 07:51:20.951046 1078428 kubeadm.go:884] updating cluster {Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:51:20.951211 1078428 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:51:20.951303 1078428 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:51:20.976663 1078428 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:51:20.976691 1078428 containerd.go:534] Images already preloaded, skipping extraction
	I1210 07:51:20.976756 1078428 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:51:21.000721 1078428 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:51:21.000745 1078428 cache_images.go:86] Images are preloaded, skipping loading
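
The preload check above is just `sudo crictl images --output json`, parsed and compared against the expected image set for v1.35.0-beta.0; when everything is present, tarball extraction and image loading are skipped. A sketch of the listing half, using crictl's documented JSON shape:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models the relevant slice of `crictl images -o json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// listImages returns every repo tag the runtime already has, the raw
// material for the "all images are preloaded" decision in the log.
func listImages() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return nil, err
	}
	var parsed crictlImages
	if err := json.Unmarshal(out, &parsed); err != nil {
		return nil, err
	}
	var tags []string
	for _, img := range parsed.Images {
		tags = append(tags, img.RepoTags...)
	}
	return tags, nil
}

func main() {
	tags, err := listImages()
	fmt.Println(tags, err)
}
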
	I1210 07:51:21.000753 1078428 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1210 07:51:21.000851 1078428 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-237317 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:51:21.000919 1078428 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:51:21.027129 1078428 cni.go:84] Creating CNI manager for ""
	I1210 07:51:21.027160 1078428 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:51:21.027182 1078428 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 07:51:21.027206 1078428 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-237317 NodeName:newest-cni-237317 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:51:21.027326 1078428 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-237317"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
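The InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration document above is rendered from the kubeadm options struct logged a few lines earlier; note how the single ExtraOptions entry pod-network-cidr=10.42.0.0/16 surfaces as both podSubnet and clusterCIDR. A trimmed, illustrative rendering of the ClusterConfiguration half (field names here are stand-ins, not minikube's own template):

package main

import (
	"os"
	"text/template"
)

// clusterCfg is a cut-down stand-in for the template that produced
// the ClusterConfiguration block in the log above.
const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("cfg").Parse(clusterCfg))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion":    "v1.35.0-beta.0",
		"ControlPlaneEndpoint": "control-plane.minikube.internal:8443",
		"PodSubnet":            "10.42.0.0/16",
		"ServiceCIDR":          "10.96.0.0/12",
	})
}

The rendered file is then scp'd to /var/tmp/minikube/kubeadm.yaml.new, as the next few log lines show.
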
	I1210 07:51:21.027402 1078428 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:51:21.035339 1078428 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:51:21.035477 1078428 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:51:21.043040 1078428 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 07:51:21.056144 1078428 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:51:21.068486 1078428 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1210 07:51:21.080830 1078428 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:51:21.084334 1078428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:51:21.093747 1078428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:21.227754 1078428 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:51:21.255098 1078428 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317 for IP: 192.168.76.2
	I1210 07:51:21.255120 1078428 certs.go:195] generating shared ca certs ...
	I1210 07:51:21.255146 1078428 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:21.255299 1078428 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 07:51:21.255358 1078428 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 07:51:21.255372 1078428 certs.go:257] generating profile certs ...
	I1210 07:51:21.255486 1078428 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.key
	I1210 07:51:21.255553 1078428 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f
	I1210 07:51:21.255599 1078428 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key
	I1210 07:51:21.255719 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 07:51:21.255759 1078428 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 07:51:21.255770 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:51:21.255801 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:51:21.255838 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:51:21.255870 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 07:51:21.255919 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:51:21.256545 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:51:21.311093 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:51:21.352581 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:51:21.373410 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:51:21.394506 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:51:21.429692 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:51:21.462387 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:51:21.492668 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 07:51:21.520168 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 07:51:21.538625 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:51:21.556477 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 07:51:21.574823 1078428 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
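	The "scp memory -->" entries indicate the payload (here the rendered kubeconfig) is generated in memory and streamed over the SSH session rather than staged as a local file. A hedged shell equivalent, where render_kubeconfig is a hypothetical stand-in for whatever produces the bytes, and the user/port mirror the ssh clients created later in this log:

	    # render_kubeconfig is hypothetical; docker@127.0.0.1:33845 mirrors the sshutil entries below
	    render_kubeconfig | ssh -p 33845 docker@127.0.0.1 \
	      "sudo tee /var/lib/minikube/kubeconfig >/dev/null"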
	I1210 07:51:21.587970 1078428 ssh_runner.go:195] Run: openssl version
	I1210 07:51:21.594082 1078428 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:21.601606 1078428 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:51:21.609233 1078428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:21.613206 1078428 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:21.613303 1078428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:21.655122 1078428 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:51:21.662415 1078428 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 07:51:21.669633 1078428 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 07:51:21.677051 1078428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 07:51:21.680913 1078428 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 07:51:21.680973 1078428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 07:51:21.722892 1078428 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:51:21.730172 1078428 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 07:51:21.737341 1078428 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 07:51:21.744828 1078428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 07:51:21.748681 1078428 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 07:51:21.748767 1078428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 07:51:21.790554 1078428 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
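	The three test/ln/hash rounds above implement OpenSSL's c_rehash convention: each CA file gets a symlink named <subject-hash>.0 under /etc/ssl/certs so the TLS stack can locate it by hash. A compact sketch of one round, assuming the same paths as the log:

	    pem=/usr/share/ca-certificates/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in "$pem")   # e.g. b5213941, as logged above
	    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"  # OpenSSL resolves CAs via <hash>.0 links
	    sudo test -L "/etc/ssl/certs/${hash}.0"        # confirm the link landed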
	I1210 07:51:21.797952 1078428 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:51:21.801618 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:51:21.842558 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:51:21.883251 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:51:21.924099 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:51:21.965360 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:51:22.007244 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
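	The -checkend 86400 runs above gate the cluster restart on certificate freshness: openssl exits non-zero if a certificate expires within the next 86400 seconds (24h), which would force regeneration. A sketch over two of the same certs:

	    for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	               /var/lib/minikube/certs/etcd/server.crt; do
	      if openssl x509 -noout -in "$crt" -checkend 86400; then
	        echo "$crt: valid for at least another 24h"
	      else
	        echo "$crt: expires within 24h, would need regeneration"
	      fi
	    done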
	I1210 07:51:22.049094 1078428 kubeadm.go:401] StartCluster: {Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:51:22.049233 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:51:22.049334 1078428 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:51:22.093879 1078428 cri.go:89] found id: ""
	I1210 07:51:22.094034 1078428 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:51:22.108858 1078428 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:51:22.108920 1078428 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:51:22.109002 1078428 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:51:22.119866 1078428 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:51:22.120478 1078428 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-237317" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:22.120794 1078428 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-784887/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-237317" cluster setting kubeconfig missing "newest-cni-237317" context setting]
	I1210 07:51:22.121355 1078428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
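	Since the profile was missing from the kubeconfig, minikube takes a write lock and repairs the file in place. A hypothetical manual equivalent of that repair with plain kubectl (the CA path is illustrative):

	    kubectl config set-cluster newest-cni-237317 \
	      --server=https://192.168.76.2:8443 \
	      --certificate-authority="$HOME/.minikube/ca.crt"
	    kubectl config set-context newest-cni-237317 \
	      --cluster=newest-cni-237317 --user=newest-cni-237317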
	I1210 07:51:22.123034 1078428 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:51:22.139211 1078428 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1210 07:51:22.139284 1078428 kubeadm.go:602] duration metric: took 30.344057ms to restartPrimaryControlPlane
	I1210 07:51:22.139309 1078428 kubeadm.go:403] duration metric: took 90.22699ms to StartCluster
	I1210 07:51:22.139351 1078428 settings.go:142] acquiring lock: {Name:mkea9bfe0055ec090e4ce10a287599541b65b38e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:22.139430 1078428 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:22.140615 1078428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:22.141197 1078428 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:51:22.141378 1078428 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:51:22.149299 1078428 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-237317"
	I1210 07:51:22.149322 1078428 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-237317"
	I1210 07:51:22.149353 1078428 host.go:66] Checking if "newest-cni-237317" exists ...
	I1210 07:51:22.149966 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:22.141985 1078428 config.go:182] Loaded profile config "newest-cni-237317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:22.150417 1078428 addons.go:70] Setting dashboard=true in profile "newest-cni-237317"
	I1210 07:51:22.150441 1078428 addons.go:239] Setting addon dashboard=true in "newest-cni-237317"
	W1210 07:51:22.150449 1078428 addons.go:248] addon dashboard should already be in state true
	I1210 07:51:22.150502 1078428 host.go:66] Checking if "newest-cni-237317" exists ...
	I1210 07:51:22.151022 1078428 addons.go:70] Setting default-storageclass=true in profile "newest-cni-237317"
	I1210 07:51:22.151064 1078428 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-237317"
	I1210 07:51:22.151139 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:22.151406 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:22.154353 1078428 out.go:179] * Verifying Kubernetes components...
	I1210 07:51:22.159801 1078428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:22.209413 1078428 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:51:22.216779 1078428 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:22.216810 1078428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:51:22.216899 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:22.223328 1078428 addons.go:239] Setting addon default-storageclass=true in "newest-cni-237317"
	I1210 07:51:22.223372 1078428 host.go:66] Checking if "newest-cni-237317" exists ...
	I1210 07:51:22.223787 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:22.224255 1078428 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 07:51:22.227259 1078428 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 07:51:22.230643 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 07:51:22.230670 1078428 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 07:51:22.230738 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:22.262205 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:22.304886 1078428 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:22.304913 1078428 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:51:22.305020 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:22.320571 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:22.350629 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:22.414331 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:22.428355 1078428 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:51:22.476480 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 07:51:22.476506 1078428 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 07:51:22.499604 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:22.511381 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.511434 1078428 retry.go:31] will retry after 354.449722ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
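	Every apply in this stretch fails the same way: kubectl's client-side validation needs the OpenAPI document from the apiserver, nothing is listening on localhost:8443 yet, so the command exits 1 and minikube schedules a jittered retry. The loop reduces to something like the sketch below; the delays are illustrative, not retry.go's actual schedule:

	    for delay in 0.4 0.8 1.6 3.2; do   # illustrative backoff, not minikube's
	      kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml && break
	      sleep "$delay"                   # wait out the apiserver's startup window
	    done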
	I1210 07:51:22.512377 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 07:51:22.512398 1078428 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 07:51:22.525695 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 07:51:22.525721 1078428 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 07:51:22.549890 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 07:51:22.549921 1078428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 07:51:22.571318 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 07:51:22.571360 1078428 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 07:51:22.590078 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 07:51:22.590107 1078428 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 07:51:22.605317 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 07:51:22.605341 1078428 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 07:51:22.618168 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 07:51:22.618200 1078428 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 07:51:22.632058 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:22.632138 1078428 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 07:51:22.645108 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:22.866802 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:23.047272 1078428 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:51:23.047355 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
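	In parallel with the addon retries, minikube polls for the apiserver process itself; the pgrep above returns non-zero until a kube-apiserver matching the pattern exists. A minimal readiness gate in the same spirit:

	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 1   # keep polling until the process appears
	    done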
	W1210 07:51:23.047482 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.047505 1078428 retry.go:31] will retry after 239.047353ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:23.047709 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.047727 1078428 retry.go:31] will retry after 188.716917ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:23.047786 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.047796 1078428 retry.go:31] will retry after 517.712293ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.237633 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:23.287256 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:23.302152 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.302252 1078428 retry.go:31] will retry after 469.586518ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:23.346821 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.346867 1078428 retry.go:31] will retry after 517.463027ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.548102 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:23.566734 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:23.638131 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.638161 1078428 retry.go:31] will retry after 398.122111ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.772509 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:23.859471 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.859510 1078428 retry.go:31] will retry after 826.751645ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.865483 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:23.933950 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.933981 1078428 retry.go:31] will retry after 776.320293ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.037254 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:24.047892 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:24.103304 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.103348 1078428 retry.go:31] will retry after 781.805737ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.609734 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:20.615162 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:20.763154 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.763202 1077343 retry.go:31] will retry after 629.698549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard "connection refused" validation errors, repeated verbatim]
	W1210 07:51:20.763322 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.763340 1077343 retry.go:31] will retry after 624.408887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: the same storageclass "connection refused" validation error, repeated verbatim]
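The retry.go:31 entries above trace a simple pattern: each failed kubectl apply is re-run after a delay that grows between attempts and is never a round number, which suggests jitter. A minimal Go sketch of that shape follows; the doubling-with-jitter policy, the attempt budget, and the applyWithRetry name are illustrative assumptions, not minikube's verbatim retry implementation.

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs `kubectl apply` until it succeeds or the attempt
    // budget is spent, sleeping a growing, jittered interval between tries,
    // the same shape as the "will retry after ..." lines in this log.
    // (Assumed policy for illustration; not minikube's exact backoff.)
    func applyWithRetry(manifest string, attempts int) error {
        backoff := 500 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            out, e := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
            if e == nil {
                return nil
            }
            err = fmt.Errorf("%v: %s", e, out)
            // Add random jitter so concurrent retriers do not fire in lockstep.
            wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            backoff *= 2
        }
        return err
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
            fmt.Println("giving up:", err)
        }
    }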
	W1210 07:51:21.054168 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:21.199599 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:21.288128 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:21.288156 1077343 retry.go:31] will retry after 1.429543278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: the same storage-provisioner "connection refused" validation error, repeated verbatim]
	I1210 07:51:21.388486 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:21.393905 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:21.513396 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:21.513426 1077343 retry.go:31] will retry after 1.363983036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: the same storageclass "connection refused" validation error, repeated verbatim]
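All of these validation failures share one root cause: kubectl cannot download the OpenAPI schema because nothing is listening on localhost:8443 yet. A small diagnostic sketch in Go reproduces the dial error directly; the URL is taken from the stderr above, and skipping TLS verification is for illustration only.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Probe the exact URL kubectl fetches the schema from. While the
        // apiserver is down this fails with "connect: connection refused",
        // matching the stderr captured in this log.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
            },
        }
        resp, err := client.Get("https://localhost:8443/openapi/v2?timeout=32s")
        if err != nil {
            fmt.Println("probe failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("apiserver answered:", resp.Status)
    }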
	W1210 07:51:21.522339 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard "connection refused" validation errors as above]
	I1210 07:51:21.522370 1077343 retry.go:31] will retry after 1.881789089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard "connection refused" validation errors, repeated verbatim]
	I1210 07:51:22.718226 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:22.784732 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: the same storage-provisioner "connection refused" validation error as above]
	I1210 07:51:22.784765 1077343 retry.go:31] will retry after 2.14784628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: the same storage-provisioner "connection refused" validation error, repeated verbatim]
	I1210 07:51:22.877998 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:22.948118 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: the same storageclass "connection refused" validation error as above]
	I1210 07:51:22.948146 1077343 retry.go:31] will retry after 2.832610868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: the same storageclass "connection refused" validation error, repeated verbatim]
	W1210 07:51:23.054783 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
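The node_ready.go:55 warning shows the same outage from a second angle: a poll of the node's Ready condition against the cluster address 192.168.85.2:8443. A minimal client-go sketch of such a readiness poll, assuming the kubeconfig path and node name that appear in this log, could look like this:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-587009", metav1.GetOptions{})
            if err != nil {
                // While the apiserver is down this is the "connection refused"
                // path logged above; keep polling.
                fmt.Println("will retry:", err)
                time.Sleep(2 * time.Second)
                continue
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Println("node is Ready")
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
    }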
	I1210 07:51:23.404396 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:23.467879 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard "connection refused" validation errors as above]
	I1210 07:51:23.467914 1077343 retry.go:31] will retry after 2.135960827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard "connection refused" validation errors, repeated verbatim]
	I1210 07:51:24.933362 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:24.999854 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: the same storage-provisioner "connection refused" validation error as above]
	I1210 07:51:24.999895 1077343 retry.go:31] will retry after 3.6382738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: the same storage-provisioner "connection refused" validation error, repeated verbatim]
	I1210 07:51:24.548307 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:24.687434 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:24.711319 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:24.773539 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard "connection refused" validation errors as above]
	I1210 07:51:24.773577 1078428 retry.go:31] will retry after 997.771985ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard "connection refused" validation errors, repeated verbatim]
	W1210 07:51:24.790786 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: the same storageclass "connection refused" validation error as above]
	I1210 07:51:24.790863 1078428 retry.go:31] will retry after 982.839582ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: the same storageclass "connection refused" validation error, repeated verbatim]
	I1210 07:51:24.886098 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:24.963470 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: the same storage-provisioner "connection refused" validation error as above]
	I1210 07:51:24.963508 1078428 retry.go:31] will retry after 1.65409552s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: the same storage-provisioner "connection refused" validation error, repeated verbatim]
	I1210 07:51:25.047816 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:25.547590 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
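Interleaved with the applies, the second process (pid 1078428) repeatedly checks whether a kube-apiserver process exists at all. Stripped of the SSH transport that ssh_runner provides, that check reduces to roughly the following sketch; running pgrep locally rather than inside the minikube node is an assumption made to keep the example self-contained.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // pgrep -x matches the whole command line (-f), -n picks the newest
        // match; a non-zero exit means no kube-apiserver is running yet.
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("kube-apiserver not running:", err)
            return
        }
        fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
    }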
	I1210 07:51:25.771778 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:25.774151 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:25.936732 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard "connection refused" validation errors as above]
	I1210 07:51:25.936801 1078428 retry.go:31] will retry after 1.015181303s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard "connection refused" validation errors, repeated verbatim]
	W1210 07:51:25.947734 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: the same storageclass "connection refused" validation error as above]
	I1210 07:51:25.947767 1078428 retry.go:31] will retry after 1.482437442s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: the same storageclass "connection refused" validation error, repeated verbatim]
	I1210 07:51:26.048146 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:26.547461 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:26.617808 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:26.678401 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: the same storage-provisioner "connection refused" validation error as above]
	I1210 07:51:26.678435 1078428 retry.go:31] will retry after 1.557494695s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: the same storage-provisioner "connection refused" validation error, repeated verbatim]
	I1210 07:51:26.952842 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:27.019482 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard "connection refused" validation errors as above]
	I1210 07:51:27.019568 1078428 retry.go:31] will retry after 1.273355747s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard "connection refused" validation errors, repeated verbatim]
	I1210 07:51:27.047573 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:27.431325 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:27.498014 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.498046 1078428 retry.go:31] will retry after 1.046464225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.548153 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:28.047490 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:28.236708 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:28.293309 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:28.313086 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.313117 1078428 retry.go:31] will retry after 2.925748723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:28.376082 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.376136 1078428 retry.go:31] will retry after 3.458373128s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.545585 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:28.548098 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:28.611335 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.611369 1078428 retry.go:31] will retry after 3.856495335s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:29.047665 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:25.554994 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:25.604337 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:25.669224 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.669262 1077343 retry.go:31] will retry after 2.194006804s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.781321 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:25.929708 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.929740 1077343 retry.go:31] will retry after 3.276039002s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.863966 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:27.927673 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.927709 1077343 retry.go:31] will retry after 5.303571514s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:28.054575 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:28.639292 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:28.698653 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.698686 1077343 retry.go:31] will retry after 3.005783671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:29.206806 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:29.264930 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:29.264960 1077343 retry.go:31] will retry after 2.489245949s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:29.547947 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:30.047725 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:30.548382 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:31.048336 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:31.239688 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:31.305382 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.305411 1078428 retry.go:31] will retry after 5.48588333s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.547900 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:31.835667 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:31.907250 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.907288 1078428 retry.go:31] will retry after 3.413940388s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:32.047433 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:32.468741 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:32.529582 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:32.529616 1078428 retry.go:31] will retry after 2.765741211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:32.547808 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:33.048388 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:33.547638 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:34.048299 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:30.554528 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:31.705403 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:31.754983 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:31.764053 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.764088 1077343 retry.go:31] will retry after 6.263299309s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:31.824900 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.824937 1077343 retry.go:31] will retry after 8.063912103s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:32.554572 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:33.232049 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:33.291801 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:33.291838 1077343 retry.go:31] will retry after 5.361341065s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:34.554757 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:34.547845 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:35.048329 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:35.295932 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:35.322379 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:35.361522 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:35.361555 1078428 retry.go:31] will retry after 3.648316362s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:35.394430 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:35.394485 1078428 retry.go:31] will retry after 5.549499405s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:35.547462 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:36.048235 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:36.547640 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:36.792053 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:36.857078 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:36.857110 1078428 retry.go:31] will retry after 8.697501731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:37.048326 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:37.548396 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:38.047529 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:38.547464 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:39.010651 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:39.048217 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:39.071638 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:39.071669 1078428 retry.go:31] will retry after 13.355816146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:37.053891 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:38.027881 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:38.116733 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:38.116768 1077343 retry.go:31] will retry after 12.105620641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:38.653613 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:38.715053 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:38.715087 1077343 retry.go:31] will retry after 11.375750542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:39.554885 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:39.889521 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:39.947993 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:39.948032 1077343 retry.go:31] will retry after 6.34767532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:39.547555 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:40.048271 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:40.548333 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:40.944176 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:41.005827 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:41.005869 1078428 retry.go:31] will retry after 6.58383212s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:41.047819 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:41.547642 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:42.048470 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:42.547646 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:43.047482 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:43.548313 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:44.048345 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:42.054758 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:51:44.554149 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:44.547780 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:45.048251 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:45.547682 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:45.555791 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:45.648631 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:45.648667 1078428 retry.go:31] will retry after 11.694093059s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:46.048267 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:46.547745 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:47.047711 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:47.547488 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:47.590140 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:47.657175 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:47.657216 1078428 retry.go:31] will retry after 17.707179987s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:48.047554 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:48.547523 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:49.048229 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:46.296554 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:46.375385 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:46.375418 1077343 retry.go:31] will retry after 17.860418691s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:47.054540 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:51:49.054867 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:50.091584 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:50.153219 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:50.153253 1077343 retry.go:31] will retry after 15.008999648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:50.223406 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:50.279259 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:50.279296 1077343 retry.go:31] will retry after 9.416080018s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:49.547855 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:50.048310 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:50.547470 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:51.048482 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:51.547803 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:52.048220 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:52.428493 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:52.490932 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:52.490967 1078428 retry.go:31] will retry after 16.825164958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:52.548145 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:53.047509 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:53.548344 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:54.047578 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:51.553954 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:51:53.554866 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:54.547773 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:55.047551 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:55.547690 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:56.047804 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:56.547512 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:57.048500 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
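
The half-second cadence of the sudo pgrep -xnf kube-apiserver.*minikube.* lines is minikube polling for the apiserver process to appear on the node. A minimal sketch of that poll loop (waitForProcess is an illustrative name; pgrep exits non-zero when nothing matches, which is why "not found yet" surfaces as an error):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls pgrep every 500ms, as the log does, until a process
    // matching pattern exists or the deadline passes.
    func waitForProcess(pattern string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output(); err == nil {
                fmt.Printf("found pid: %s", out)
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("no process matched %q within %s", pattern, timeout)
    }

    func main() {
        if err := waitForProcess(`kube-apiserver.*minikube.*`, 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
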
	I1210 07:51:57.343638 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:57.401827 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:57.401862 1078428 retry.go:31] will retry after 12.086669618s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:57.548118 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:58.047490 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:58.547566 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:59.047512 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:56.054059 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:51:58.554867 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:59.696250 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:59.757338 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:59.757373 1077343 retry.go:31] will retry after 26.778697297s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:59.547820 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:00.048277 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:00.547702 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:01.047690 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:01.548160 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:02.047532 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:02.547658 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:03.048174 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:03.547494 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:04.047488 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:52:01.054130 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:03.554867 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:04.236888 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:04.303052 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:04.303083 1077343 retry.go:31] will retry after 25.859676141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:05.163286 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:52:05.227326 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:05.227361 1077343 retry.go:31] will retry after 29.528693098s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:04.547752 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:05.047684 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:05.364684 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:52:05.426426 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:05.426483 1078428 retry.go:31] will retry after 20.310563443s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:05.547649 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:06.047574 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:06.547647 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:07.048386 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:07.548191 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:08.047499 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:08.547510 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:09.047557 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:09.316912 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:09.386785 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:09.386818 1078428 retry.go:31] will retry after 17.689212788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:09.489070 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:52:06.053981 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:08.554858 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:09.547482 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:52:09.552880 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:09.552917 1078428 retry.go:31] will retry after 27.483688335s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:10.047697 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:10.548124 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:11.047626 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:11.548296 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:12.048335 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:12.548247 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:13.047495 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:13.547530 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:14.047549 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:52:11.053980 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:13.054863 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:15.055109 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:14.547736 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:15.047574 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:15.548227 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:16.047516 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:16.548114 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:17.047567 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:17.547679 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:18.048185 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:18.548203 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:19.047660 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:52:17.055513 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:19.553887 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:19.547978 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:20.048384 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:20.548389 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:21.048134 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:21.547434 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:22.048274 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:22.547540 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:22.547641 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:22.572419 1078428 cri.go:89] found id: ""
	I1210 07:52:22.572446 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.572457 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:22.572464 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:22.572530 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:22.596895 1078428 cri.go:89] found id: ""
	I1210 07:52:22.596923 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.596931 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:22.596938 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:22.597000 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:22.621678 1078428 cri.go:89] found id: ""
	I1210 07:52:22.621705 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.621713 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:22.621720 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:22.621783 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:22.646160 1078428 cri.go:89] found id: ""
	I1210 07:52:22.646188 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.646198 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:22.646205 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:22.646270 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:22.671641 1078428 cri.go:89] found id: ""
	I1210 07:52:22.671670 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.671680 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:22.671686 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:22.671750 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:22.697149 1078428 cri.go:89] found id: ""
	I1210 07:52:22.697177 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.697187 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:22.697194 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:22.697255 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:22.722276 1078428 cri.go:89] found id: ""
	I1210 07:52:22.722300 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.722318 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:22.722324 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:22.722388 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:22.751396 1078428 cri.go:89] found id: ""
	I1210 07:52:22.751422 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.751431 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
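
The sweep above queries containerd through crictl for each expected control-plane container by name; an empty ID list for every one of them confirms nothing is running under the CRI, consistent with the connection refusals. A condensed sketch of the same sweep (the variable names here are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // controlPlaneContainers lists the names the log checks one by one.
    var controlPlaneContainers = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    }

    func main() {
        for _, name := range controlPlaneContainers {
            out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Printf("no container matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
        }
    }
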
	I1210 07:52:22.751440 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:22.751452 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:22.806571 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:22.806611 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:22.824584 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:22.824623 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:22.902683 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:22.894538    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.894950    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.896552    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.897020    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.898547    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:22.894538    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.894950    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.896552    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.897020    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.898547    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:22.902704 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:22.902719 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:22.928289 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:22.928326 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
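
With the apiserver unreachable, minikube falls back to host-side diagnostics gathered over SSH: the kubelet and containerd journals, filtered dmesg, raw container status, plus a kubectl describe nodes attempt that fails for the same connection-refused reason. A sketch that runs the same host-side commands (the diagnostics map is illustrative, and the crictl fallback chain from the log is simplified to a plain crictl call):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // diagnostics mirrors the fallback sources in the log: everything here
    // still works when the apiserver is down.
    var diagnostics = map[string]string{
        "kubelet":          "sudo journalctl -u kubelet -n 400",
        "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        "containerd":       "sudo journalctl -u containerd -n 400",
        "container status": "sudo crictl ps -a",
    }

    func main() {
        for name, cmd := range diagnostics {
            // CombinedOutput keeps stderr, which is where most of the useful
            // failure detail lands for these commands.
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            fmt.Printf("== %s (err: %v) ==\n%s\n", name, err, out)
        }
    }
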
	W1210 07:52:21.554922 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:24.054424 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:25.461464 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:25.472201 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:25.472303 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:25.498226 1078428 cri.go:89] found id: ""
	I1210 07:52:25.498253 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.498263 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:25.498269 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:25.498331 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:25.524731 1078428 cri.go:89] found id: ""
	I1210 07:52:25.524759 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.524777 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:25.524789 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:25.524855 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:25.554155 1078428 cri.go:89] found id: ""
	I1210 07:52:25.554178 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.554187 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:25.554194 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:25.554252 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:25.580553 1078428 cri.go:89] found id: ""
	I1210 07:52:25.580584 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.580593 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:25.580599 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:25.580669 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:25.606241 1078428 cri.go:89] found id: ""
	I1210 07:52:25.606309 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.606341 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:25.606369 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:25.606449 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:25.630882 1078428 cri.go:89] found id: ""
	I1210 07:52:25.630912 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.630921 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:25.630928 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:25.631028 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:25.657178 1078428 cri.go:89] found id: ""
	I1210 07:52:25.657207 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.657215 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:25.657221 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:25.657282 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:25.686580 1078428 cri.go:89] found id: ""
	I1210 07:52:25.686604 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.686612 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:25.686622 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:25.686634 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:25.737209 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:52:25.742985 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:25.743060 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:52:25.816909 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:25.817156 1078428 retry.go:31] will retry after 25.212576039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:25.818420 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:25.818454 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:25.889855 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:25.881344    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.882021    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.883760    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.884373    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.885999    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:25.881344    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.882021    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.883760    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.884373    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.885999    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
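The cycle above is minikube's diagnostics sweep: each log source (kubelet, dmesg, describe nodes, containerd, container status) is collected independently, so a dead apiserver only breaks the kubectl-backed sources while the journalctl and crictl gathers still succeed. Below is a minimal Go sketch of that keep-going collection pattern, using local os/exec in place of minikube's ssh_runner; the gather helper name is hypothetical and the commands are assumed to be on PATH.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command and reports failures without
// aborting the overall collection, mirroring how the log above keeps
// going after "failed describe nodes".
func gather(name string, args ...string) {
	var stdout, stderr bytes.Buffer
	cmd := exec.Command(name, args...)
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		fmt.Printf("W failed %s: %v\nstdout:\n%s\nstderr:\n%s\n",
			name, err, stdout.String(), stderr.String())
		return
	}
	fmt.Printf("I gathered %s (%d bytes)\n", name, stdout.Len())
}

func main() {
	// Each source is attempted independently; a refused apiserver only
	// breaks the kubectl-based source, not the journalctl one.
	gather("journalctl", "-u", "kubelet", "-n", "400")
	gather("kubectl", "describe", "nodes")
}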
	I1210 07:52:25.889919 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:25.889939 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:25.915022 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:25.915058 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:27.076870 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:27.134892 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:27.134924 1078428 retry.go:31] will retry after 48.20102621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
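The retry.go lines show the addon applier's backoff: each failed kubectl apply is re-queued with a randomized, growing wait (48.2s here) rather than failing the start outright. A rough Go sketch of that retry shape follows; the doubling-plus-jitter policy is an assumption for illustration, not minikube's exact schedule.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryApply retries fn with randomized growing waits, echoing the
// "will retry after ..." lines above. The backoff policy here is an
// illustrative assumption, not minikube's retry.go.
func retryApply(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("apply failed, will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		base *= 2
	}
	return err
}

func main() {
	err := retryApply(3, 500*time.Millisecond, func() error {
		return errors.New("connect: connection refused")
	})
	fmt.Println("final:", err)
}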
	I1210 07:52:28.443268 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:28.454097 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:28.454172 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:28.482759 1078428 cri.go:89] found id: ""
	I1210 07:52:28.482789 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.482798 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:28.482805 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:28.482868 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:28.507737 1078428 cri.go:89] found id: ""
	I1210 07:52:28.507760 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.507769 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:28.507775 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:28.507836 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:28.532881 1078428 cri.go:89] found id: ""
	I1210 07:52:28.532907 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.532916 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:28.532923 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:28.532989 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:28.562425 1078428 cri.go:89] found id: ""
	I1210 07:52:28.562451 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.562460 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:28.562489 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:28.562551 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:28.587926 1078428 cri.go:89] found id: ""
	I1210 07:52:28.587952 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.587961 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:28.587967 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:28.588026 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:28.613523 1078428 cri.go:89] found id: ""
	I1210 07:52:28.613593 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.613617 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:28.613638 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:28.613730 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:28.637796 1078428 cri.go:89] found id: ""
	I1210 07:52:28.637864 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.637888 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:28.637907 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:28.637993 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:28.666907 1078428 cri.go:89] found id: ""
	I1210 07:52:28.666937 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.666946 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:28.666956 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:28.666968 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:28.722569 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:28.722604 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:28.738517 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:28.738592 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:28.814307 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:28.798713    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.799649    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.803563    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.804492    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.807397    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:28.798713    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.799649    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.803563    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.804492    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.807397    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:28.814366 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:28.814395 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:28.842824 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:28.842905 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
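Before each sweep, every control-plane component is probed with sudo crictl ps -a --quiet --name=<component>; an empty ID list is what yields the repeated `No container was found matching ...` warnings. A small Go wrapper around that same probe, assuming crictl is on PATH and sudo is non-interactive:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// criContainers lists container IDs whose name matches filter, the same
// "sudo crictl ps -a --quiet --name=X" probe the log repeats for every
// control-plane component. Purely illustrative; requires crictl.
func criContainers(filter string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+filter).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := criContainers(name)
		if err != nil {
			fmt.Println("list failed:", err)
			continue
		}
		// An empty list here is what produces the repeated
		// `No container was found matching ...` warnings above.
		fmt.Printf("%s: %d containers\n", name, len(ids))
	}
}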
	I1210 07:52:26.536333 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:52:26.554155 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:26.621759 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:26.621788 1077343 retry.go:31] will retry after 32.881374862s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:29.054917 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:30.163626 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:30.226039 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:30.226073 1077343 retry.go:31] will retry after 27.175178767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:31.380548 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:31.391083 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:31.391159 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:31.416470 1078428 cri.go:89] found id: ""
	I1210 07:52:31.416496 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.416504 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:31.416510 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:31.416570 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:31.441740 1078428 cri.go:89] found id: ""
	I1210 07:52:31.441767 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.441776 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:31.441782 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:31.441843 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:31.465834 1078428 cri.go:89] found id: ""
	I1210 07:52:31.465860 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.465869 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:31.465875 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:31.465935 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:31.492061 1078428 cri.go:89] found id: ""
	I1210 07:52:31.492085 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.492093 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:31.492099 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:31.492177 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:31.515891 1078428 cri.go:89] found id: ""
	I1210 07:52:31.515971 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.515993 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:31.516010 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:31.516096 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:31.540039 1078428 cri.go:89] found id: ""
	I1210 07:52:31.540061 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.540069 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:31.540076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:31.540169 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:31.565345 1078428 cri.go:89] found id: ""
	I1210 07:52:31.565372 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.565388 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:31.565395 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:31.565513 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:31.590011 1078428 cri.go:89] found id: ""
	I1210 07:52:31.590035 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.590044 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:31.590074 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:31.590089 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:31.656796 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:31.649034    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.649484    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.650940    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.651317    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.652719    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:31.649034    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.649484    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.650940    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.651317    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.652719    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:31.656816 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:31.656828 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:31.681821 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:31.681855 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:31.709786 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:31.709815 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:31.764688 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:31.764728 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:34.283681 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:34.296241 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:34.296314 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:34.337179 1078428 cri.go:89] found id: ""
	I1210 07:52:34.337201 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.337210 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:34.337216 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:34.337274 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:34.369583 1078428 cri.go:89] found id: ""
	I1210 07:52:34.369611 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.369619 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:34.369625 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:34.369683 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:34.395566 1078428 cri.go:89] found id: ""
	I1210 07:52:34.395591 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.395600 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:34.395606 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:34.395688 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:34.419610 1078428 cri.go:89] found id: ""
	I1210 07:52:34.419677 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.419702 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:34.419718 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:34.419797 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:34.444441 1078428 cri.go:89] found id: ""
	I1210 07:52:34.444511 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.444535 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:34.444550 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:34.444627 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:34.469517 1078428 cri.go:89] found id: ""
	I1210 07:52:34.469540 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.469549 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:34.469556 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:34.469618 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:34.494093 1078428 cri.go:89] found id: ""
	I1210 07:52:34.494120 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.494129 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:34.494136 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:34.494196 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	W1210 07:52:31.554771 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:34.054729 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:34.756990 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:52:34.831836 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:34.831956 1077343 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
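All of these apply failures share one root cause: client-side validation fetches the OpenAPI schema from https://localhost:8443, so while the apiserver is down every manifest fails before it is even sent, and --validate=false would only move the failure to the server round-trip. A cheap reachability probe of the kind that would predict these errors, sketched in Go with certificate checks skipped; the /healthz endpoint and the helper name are assumptions, with the port taken from the log:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiserverUp does the reachability check the errors above keep
// failing: if this dial is refused, every kubectl apply will fail
// during validation too. Illustrative, not a minikube helper.
func apiserverUp(url string) bool {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a self-signed cert in this setup.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	resp.Body.Close()
	return true
}

func main() {
	fmt.Println("apiserver reachable:", apiserverUp("https://localhost:8443/healthz"))
}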
	I1210 07:52:34.518575 1078428 cri.go:89] found id: ""
	I1210 07:52:34.518658 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.518674 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:34.518685 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:34.518698 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:34.534743 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:34.534770 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:34.597542 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:34.589981    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.590406    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.591861    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.592187    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.593602    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:34.589981    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.590406    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.591861    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.592187    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.593602    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:34.597564 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:34.597577 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:34.622841 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:34.622876 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:34.653362 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:34.653395 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:37.036872 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:52:37.117418 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:37.117451 1078428 retry.go:31] will retry after 42.271832156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:37.209642 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:37.220263 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:37.220360 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:37.244517 1078428 cri.go:89] found id: ""
	I1210 07:52:37.244544 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.244552 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:37.244558 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:37.244619 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:37.269073 1078428 cri.go:89] found id: ""
	I1210 07:52:37.269099 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.269108 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:37.269114 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:37.269175 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:37.292561 1078428 cri.go:89] found id: ""
	I1210 07:52:37.292587 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.292596 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:37.292604 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:37.292661 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:37.330286 1078428 cri.go:89] found id: ""
	I1210 07:52:37.330312 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.330321 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:37.330328 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:37.330388 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:37.362527 1078428 cri.go:89] found id: ""
	I1210 07:52:37.362555 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.362564 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:37.362570 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:37.362633 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:37.387887 1078428 cri.go:89] found id: ""
	I1210 07:52:37.387912 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.387921 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:37.387927 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:37.387988 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:37.412303 1078428 cri.go:89] found id: ""
	I1210 07:52:37.412329 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.412337 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:37.412344 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:37.412451 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:37.436571 1078428 cri.go:89] found id: ""
	I1210 07:52:37.436596 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.436605 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:37.436614 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:37.436626 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:37.462030 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:37.462074 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:37.489847 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:37.489875 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:37.545757 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:37.545792 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:37.561730 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:37.561763 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:37.627065 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:37.618607    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.619149    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.620826    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.621602    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.623135    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:37.618607    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.619149    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.620826    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.621602    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.623135    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
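Interleaved with the sweep, the no-preload profile (pid 1077343) is polling its node's Ready condition and logging each refused connection before retrying. A client-go sketch of that poll, assuming the k8s.io/client-go module is available; the kubeconfig path and node name come from the log but stand in here as placeholders:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition the way the
// node_ready.go warnings above keep retrying. Sketch only.
func waitNodeReady(kubeconfig, name string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		// A refused connection is logged and retried, as above.
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}

func main() {
	fmt.Println(waitNodeReady("/var/lib/minikube/kubeconfig", "no-preload-587009", time.Minute))
}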
	W1210 07:52:36.554875 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:39.054027 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:40.127737 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:40.139792 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:40.139876 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:40.166917 1078428 cri.go:89] found id: ""
	I1210 07:52:40.166944 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.166952 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:40.166964 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:40.167028 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:40.193972 1078428 cri.go:89] found id: ""
	I1210 07:52:40.194000 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.194009 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:40.194015 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:40.194111 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:40.226660 1078428 cri.go:89] found id: ""
	I1210 07:52:40.226693 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.226702 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:40.226709 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:40.226774 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:40.257013 1078428 cri.go:89] found id: ""
	I1210 07:52:40.257056 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.257067 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:40.257074 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:40.257140 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:40.282449 1078428 cri.go:89] found id: ""
	I1210 07:52:40.282500 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.282509 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:40.282516 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:40.282580 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:40.332986 1078428 cri.go:89] found id: ""
	I1210 07:52:40.333018 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.333027 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:40.333050 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:40.333188 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:40.366223 1078428 cri.go:89] found id: ""
	I1210 07:52:40.366258 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.366268 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:40.366275 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:40.366347 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:40.393136 1078428 cri.go:89] found id: ""
	I1210 07:52:40.393163 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.393171 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:40.393181 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:40.393193 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:40.422285 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:40.422314 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:40.481326 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:40.481365 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:40.497675 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:40.497725 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:40.562074 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:40.554513    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.554932    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.556446    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.556761    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.558191    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:40.554513    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.554932    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.556446    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.556761    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.558191    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:40.562093 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:40.562106 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
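Each collection cycle opens with sudo pgrep -xnf kube-apiserver.*minikube.* (as in the line just below) to check whether an apiserver process exists at all before scanning CRI containers. That probe reduces to an exit-status check, sketched here; pgrep exits 0 on a match and 1 when nothing matches:

package main

import (
	"fmt"
	"os/exec"
)

// apiserverProcessRunning mirrors the pgrep probe each cycle starts
// with: a nil error means pgrep exited 0, i.e. a matching process
// exists. Illustrative only; requires pgrep and non-interactive sudo.
func apiserverProcessRunning() bool {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil
}

func main() {
	fmt.Println("kube-apiserver process found:", apiserverProcessRunning())
}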
	I1210 07:52:43.088690 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:43.099750 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:43.099828 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:43.124516 1078428 cri.go:89] found id: ""
	I1210 07:52:43.124552 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.124561 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:43.124567 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:43.124628 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:43.153325 1078428 cri.go:89] found id: ""
	I1210 07:52:43.153347 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.153356 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:43.153362 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:43.153423 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:43.178405 1078428 cri.go:89] found id: ""
	I1210 07:52:43.178429 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.178437 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:43.178443 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:43.178609 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:43.201768 1078428 cri.go:89] found id: ""
	I1210 07:52:43.201791 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.201800 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:43.201806 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:43.201865 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:43.225907 1078428 cri.go:89] found id: ""
	I1210 07:52:43.225931 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.225940 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:43.225946 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:43.226004 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:43.250803 1078428 cri.go:89] found id: ""
	I1210 07:52:43.250828 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.250837 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:43.250843 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:43.250916 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:43.275081 1078428 cri.go:89] found id: ""
	I1210 07:52:43.275147 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.275161 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:43.275168 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:43.275245 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:43.306794 1078428 cri.go:89] found id: ""
	I1210 07:52:43.306827 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.306836 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:43.306845 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:43.306857 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:43.337826 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:43.337854 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:43.396050 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:43.396089 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
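For reference, the dmesg invocation above is util-linux dmesg with the kernel log filtered before tailing; the flags are decoded here from general knowledge of that tool rather than from this report:

  # -P no pager, -H human-readable timestamps, -L=never disable color,
  # --level keep only the listed severities; tail keeps the newest 400 lines.
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400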
	I1210 07:52:43.413002 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:43.413031 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:43.479541 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:43.471065    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.471844    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.473576    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.474063    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.475610    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
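Each of these kubectl failures is the same symptom: on the node, /var/lib/minikube/kubeconfig points at https://localhost:8443, and with no kube-apiserver container found above, nothing is listening there, so every request dies with "connection refused". A quick confirmation from the node, as a sketch (ss ships with iproute2 and is assumed to be available in the node image):

  # Any listener bound to the apiserver port? No output means none.
  sudo ss -ltnp | grep 8443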
	I1210 07:52:43.479565 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:43.479578 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1210 07:52:41.054361 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:43.054892 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
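Two test processes interleave in this log: pid 1078428 is the diagnostics loop against localhost:8443 above, while pid 1077343 belongs to the no-preload test and is polling node "no-preload-587009" at 192.168.85.2:8443. When reading long stretches, filtering by pid keeps the two apart; a sketch, with test.log standing in for wherever this output was saved:

  # Keep only one process's lines.
  grep ' 1078428 ' test.log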
	I1210 07:52:46.005454 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:46.017579 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:46.017658 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:46.053539 1078428 cri.go:89] found id: ""
	I1210 07:52:46.053570 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.053579 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:46.053585 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:46.053649 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:46.088548 1078428 cri.go:89] found id: ""
	I1210 07:52:46.088572 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.088581 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:46.088596 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:46.088660 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:46.126497 1078428 cri.go:89] found id: ""
	I1210 07:52:46.126571 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.126594 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:46.126613 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:46.126734 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:46.150556 1078428 cri.go:89] found id: ""
	I1210 07:52:46.150626 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.150643 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:46.150651 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:46.150719 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:46.174996 1078428 cri.go:89] found id: ""
	I1210 07:52:46.175019 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.175027 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:46.175033 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:46.175107 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:46.199701 1078428 cri.go:89] found id: ""
	I1210 07:52:46.199726 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.199735 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:46.199742 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:46.199845 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:46.224632 1078428 cri.go:89] found id: ""
	I1210 07:52:46.224657 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.224666 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:46.224672 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:46.224752 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:46.248234 1078428 cri.go:89] found id: ""
	I1210 07:52:46.248259 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.248267 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:46.248277 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:46.248334 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:46.264183 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:46.264221 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:46.342979 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:46.323053    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.323907    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.328271    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.328706    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.338602    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:46.343063 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:46.343092 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:46.369476 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:46.369511 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:46.397302 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:46.397339 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
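The kubelet and containerd logs are pulled with journalctl, capped at the last 400 lines per unit (-u selects the systemd unit, -n the line count). When debugging a node like this interactively, following the units live is often more telling; a sketch, assuming journal access on the node:

  # Stream kubelet entries as it tries (and fails) to start the control plane.
  sudo journalctl -u kubelet -f
  # Same for the container runtime.
  sudo journalctl -u containerd -f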
	I1210 07:52:48.952567 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:48.962857 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:48.962931 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:48.992562 1078428 cri.go:89] found id: ""
	I1210 07:52:48.992589 1078428 logs.go:282] 0 containers: []
	W1210 07:52:48.992599 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:48.992606 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:48.992671 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:49.018277 1078428 cri.go:89] found id: ""
	I1210 07:52:49.018303 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.018312 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:49.018318 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:49.018387 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:49.045715 1078428 cri.go:89] found id: ""
	I1210 07:52:49.045743 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.045752 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:49.045758 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:49.045826 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:49.083318 1078428 cri.go:89] found id: ""
	I1210 07:52:49.083348 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.083358 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:49.083364 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:49.083422 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:49.109936 1078428 cri.go:89] found id: ""
	I1210 07:52:49.109958 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.109966 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:49.109989 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:49.110049 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:49.134580 1078428 cri.go:89] found id: ""
	I1210 07:52:49.134607 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.134617 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:49.134623 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:49.134681 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:49.159828 1078428 cri.go:89] found id: ""
	I1210 07:52:49.159906 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.159924 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:49.159931 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:49.160011 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:49.184837 1078428 cri.go:89] found id: ""
	I1210 07:52:49.184862 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.184872 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:49.184881 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:49.184902 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:49.210656 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:49.210691 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:49.241224 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:49.241256 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:49.303253 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:49.303297 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:49.319808 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:49.319838 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:49.389423 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:49.381044    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.381468    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.383366    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.383682    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.385122    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 07:52:45.554347 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:47.554702 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:50.054996 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:51.030067 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:52:51.093289 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:51.093415 1078428 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
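The dashboard apply fails during client-side validation: kubectl cannot download the OpenAPI schema from the unreachable apiserver, and the stderr itself points at --validate=false. For completeness, that form is sketched below, but note it only skips schema validation; with the apiserver down the request would still be refused at submission:

  # Per the error text's own suggestion; no help while 8443 is refusing connections.
  sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false -f /etc/kubernetes/addons/dashboard-ns.yaml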
	I1210 07:52:51.889686 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:51.900249 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:51.900353 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:51.925533 1078428 cri.go:89] found id: ""
	I1210 07:52:51.925559 1078428 logs.go:282] 0 containers: []
	W1210 07:52:51.925567 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:51.925621 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:51.925706 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:51.950161 1078428 cri.go:89] found id: ""
	I1210 07:52:51.950186 1078428 logs.go:282] 0 containers: []
	W1210 07:52:51.950194 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:51.950201 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:51.950280 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:51.976938 1078428 cri.go:89] found id: ""
	I1210 07:52:51.976964 1078428 logs.go:282] 0 containers: []
	W1210 07:52:51.976972 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:51.976979 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:51.977038 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:52.006745 1078428 cri.go:89] found id: ""
	I1210 07:52:52.006841 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.006865 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:52.006887 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:52.007015 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:52.033557 1078428 cri.go:89] found id: ""
	I1210 07:52:52.033585 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.033595 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:52.033601 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:52.033672 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:52.066821 1078428 cri.go:89] found id: ""
	I1210 07:52:52.066850 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.066860 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:52.066867 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:52.066929 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:52.101024 1078428 cri.go:89] found id: ""
	I1210 07:52:52.101051 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.101060 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:52.101067 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:52.101128 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:52.130045 1078428 cri.go:89] found id: ""
	I1210 07:52:52.130070 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.130079 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:52.130088 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:52.130100 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:52.184627 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:52.184662 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:52.200733 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:52.200759 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:52.265577 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:52.257022    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.257577    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.259300    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.259861    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.261490    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:52.265610 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:52.265626 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:52.291354 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:52.291390 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:52:52.555048 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:55.054639 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:54.834203 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:54.845400 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:54.845510 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:54.871357 1078428 cri.go:89] found id: ""
	I1210 07:52:54.871383 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.871392 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:54.871399 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:54.871463 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:54.897322 1078428 cri.go:89] found id: ""
	I1210 07:52:54.897352 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.897360 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:54.897366 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:54.897425 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:54.922291 1078428 cri.go:89] found id: ""
	I1210 07:52:54.922320 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.922329 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:54.922334 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:54.922405 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:54.947056 1078428 cri.go:89] found id: ""
	I1210 07:52:54.947080 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.947089 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:54.947095 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:54.947155 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:54.972572 1078428 cri.go:89] found id: ""
	I1210 07:52:54.972599 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.972608 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:54.972614 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:54.972675 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:54.997657 1078428 cri.go:89] found id: ""
	I1210 07:52:54.997685 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.997694 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:54.997700 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:54.997777 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:55.025796 1078428 cri.go:89] found id: ""
	I1210 07:52:55.025819 1078428 logs.go:282] 0 containers: []
	W1210 07:52:55.025829 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:55.025835 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:55.026185 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:55.069593 1078428 cri.go:89] found id: ""
	I1210 07:52:55.069631 1078428 logs.go:282] 0 containers: []
	W1210 07:52:55.069640 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:55.069649 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:55.069662 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:55.135748 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:55.135788 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:55.151784 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:55.151815 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:55.220457 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:55.212663    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.213305    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.215040    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.215532    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.216531    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:55.220480 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:55.220495 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:55.245834 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:55.245869 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:57.774707 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:57.785110 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:57.785178 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:57.810275 1078428 cri.go:89] found id: ""
	I1210 07:52:57.810302 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.810320 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:57.810328 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:57.810389 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:57.838839 1078428 cri.go:89] found id: ""
	I1210 07:52:57.838862 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.838871 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:57.838877 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:57.838937 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:57.863185 1078428 cri.go:89] found id: ""
	I1210 07:52:57.863212 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.863221 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:57.863227 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:57.863287 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:57.890204 1078428 cri.go:89] found id: ""
	I1210 07:52:57.890234 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.890244 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:57.890250 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:57.890314 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:57.916593 1078428 cri.go:89] found id: ""
	I1210 07:52:57.916616 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.916624 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:57.916630 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:57.916690 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:57.940351 1078428 cri.go:89] found id: ""
	I1210 07:52:57.940373 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.940381 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:57.940387 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:57.940448 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:57.965417 1078428 cri.go:89] found id: ""
	I1210 07:52:57.965453 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.965462 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:57.965469 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:57.965535 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:57.989157 1078428 cri.go:89] found id: ""
	I1210 07:52:57.989183 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.989192 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:57.989202 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:57.989213 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:58.015326 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:58.015366 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:58.055222 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:58.055248 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:58.115866 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:58.115945 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:58.131823 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:58.131852 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:58.196880 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:58.188547    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.189067    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.190600    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.191285    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.192890    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
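By this point the cycle has repeated every two to three seconds with no change: the CRI never reports a single control-plane container, which puts the root cause upstream of containerd, most likely in the kubelet failing to launch the static pods. Two checks worth running on the node here, as a sketch (minikube follows the standard kubeadm layout, where static pod manifests live under /etc/kubernetes/manifests):

  # Are the control-plane static pod manifests present?
  ls /etc/kubernetes/manifests
  # Did the kubelet log why it could not start them?
  sudo journalctl -u kubelet -n 400 | grep -i apiserver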
	I1210 07:52:57.402101 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:57.460754 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:57.460865 1077343 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1210 07:52:57.554262 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:59.503589 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:52:59.554549 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:59.576553 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:59.576655 1077343 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:52:59.579701 1077343 out.go:179] * Enabled addons: 
	I1210 07:52:59.582536 1077343 addons.go:530] duration metric: took 1m41.60352286s for enable addons: enabled=[]
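The empty enabled=[] list confirms that after 1m41s of retries none of the requested addons were actually applied. Addon state can also be checked from the host after the fact; a sketch, assuming the profile name matches the node name seen above:

  minikube addons list -p no-preload-587009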
	I1210 07:53:00.697148 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:00.707593 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:00.707661 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:00.735938 1078428 cri.go:89] found id: ""
	I1210 07:53:00.735962 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.735971 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:00.735977 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:00.736039 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:00.759785 1078428 cri.go:89] found id: ""
	I1210 07:53:00.759808 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.759817 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:00.759823 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:00.759887 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:00.784529 1078428 cri.go:89] found id: ""
	I1210 07:53:00.784552 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.784561 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:00.784567 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:00.784641 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:00.813420 1078428 cri.go:89] found id: ""
	I1210 07:53:00.813443 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.813452 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:00.813459 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:00.813518 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:00.838413 1078428 cri.go:89] found id: ""
	I1210 07:53:00.838439 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.838449 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:00.838455 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:00.838559 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:00.862923 1078428 cri.go:89] found id: ""
	I1210 07:53:00.862949 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.862968 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:00.862975 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:00.863034 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:00.890339 1078428 cri.go:89] found id: ""
	I1210 07:53:00.890366 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.890375 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:00.890381 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:00.890440 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:00.916963 1078428 cri.go:89] found id: ""
	I1210 07:53:00.916992 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.917001 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:00.917010 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:00.917022 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:00.972565 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:00.972601 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:00.990064 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:00.990154 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:01.068497 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:01.060518    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.061332    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.062990    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.063328    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.064654    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:01.060518    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.061332    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.062990    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.063328    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.064654    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:01.068521 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:01.068534 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:01.097602 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:01.097641 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
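	The retry loop above reduces to a simple shell probe: pgrep for a kube-apiserver process, then ask crictl for each expected control-plane container by name. A minimal sketch of the same check, built only from commands that appear verbatim in the ssh_runner lines above (run inside the node, e.g. via minikube ssh):

	    # Probe for control-plane containers the way the gather loop does.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	        ids=$(sudo crictl ps -a --quiet --name="$name")
	        # An empty result corresponds to the logs.go:284 warnings above.
	        [ -z "$ids" ] && echo "no container matching \"$name\""
	    done

	Every probe coming back empty is what drives the repeated "Gathering logs for ..." passes that follow.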
	I1210 07:53:03.628666 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:03.639440 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:03.639518 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:03.664498 1078428 cri.go:89] found id: ""
	I1210 07:53:03.664523 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.664531 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:03.664538 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:03.664601 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:03.688357 1078428 cri.go:89] found id: ""
	I1210 07:53:03.688382 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.688391 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:03.688397 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:03.688460 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:03.712874 1078428 cri.go:89] found id: ""
	I1210 07:53:03.712898 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.712906 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:03.712913 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:03.712990 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:03.737610 1078428 cri.go:89] found id: ""
	I1210 07:53:03.737635 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.737643 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:03.737650 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:03.737712 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:03.762668 1078428 cri.go:89] found id: ""
	I1210 07:53:03.762695 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.762703 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:03.762710 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:03.762769 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:03.795710 1078428 cri.go:89] found id: ""
	I1210 07:53:03.795732 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.795741 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:03.795747 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:03.795809 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:03.819247 1078428 cri.go:89] found id: ""
	I1210 07:53:03.819275 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.819285 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:03.819291 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:03.819355 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:03.842854 1078428 cri.go:89] found id: ""
	I1210 07:53:03.842881 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.842891 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:03.842900 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:03.842911 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:03.858681 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:03.858748 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:03.922352 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:03.913909    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.914521    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.916231    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.916791    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.918447    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:03.913909    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.914521    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.916231    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.916791    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.918447    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:03.922383 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:03.922401 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:03.948481 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:03.948520 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:03.977218 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:03.977247 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:53:02.054010 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:04.555038 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
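	In parallel, the no-preload-587009 client keeps polling the node's Ready condition against the apiserver URL shown in the warnings. A hypothetical manual equivalent (the endpoint is copied from the log; -k skips TLS verification for brevity, and while the apiserver is down the call fails with the same "connection refused"):

	    curl -k https://192.168.85.2:8443/api/v1/nodes/no-preload-587009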
	I1210 07:53:06.532410 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:06.544357 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:06.544451 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:06.576472 1078428 cri.go:89] found id: ""
	I1210 07:53:06.576500 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.576511 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:06.576517 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:06.576581 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:06.609024 1078428 cri.go:89] found id: ""
	I1210 07:53:06.609051 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.609061 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:06.609067 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:06.609134 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:06.636182 1078428 cri.go:89] found id: ""
	I1210 07:53:06.636209 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.636218 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:06.636224 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:06.636286 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:06.664610 1078428 cri.go:89] found id: ""
	I1210 07:53:06.664677 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.664699 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:06.664720 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:06.664812 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:06.690522 1078428 cri.go:89] found id: ""
	I1210 07:53:06.690548 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.690557 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:06.690564 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:06.690626 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:06.716006 1078428 cri.go:89] found id: ""
	I1210 07:53:06.716035 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.716044 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:06.716050 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:06.716115 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:06.740705 1078428 cri.go:89] found id: ""
	I1210 07:53:06.740726 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.740734 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:06.740741 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:06.740803 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:06.764831 1078428 cri.go:89] found id: ""
	I1210 07:53:06.764852 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.764860 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:06.764869 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:06.764881 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:06.820337 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:06.820372 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:06.836899 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:06.836931 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:06.902143 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:06.893655    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.894319    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.895838    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.896417    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.898017    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:06.893655    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.894319    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.895838    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.896417    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.898017    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:06.902164 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:06.902178 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:06.927253 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:06.927289 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:09.458854 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:09.469382 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:09.469466 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:09.494769 1078428 cri.go:89] found id: ""
	I1210 07:53:09.494791 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.494799 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:09.494805 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:09.494866 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1210 07:53:07.053986 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:09.554520 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:09.520347 1078428 cri.go:89] found id: ""
	I1210 07:53:09.520374 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.520383 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:09.520390 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:09.520454 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:09.549983 1078428 cri.go:89] found id: ""
	I1210 07:53:09.550010 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.550019 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:09.550025 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:09.550085 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:09.588794 1078428 cri.go:89] found id: ""
	I1210 07:53:09.588821 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.588830 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:09.588836 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:09.588895 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:09.617370 1078428 cri.go:89] found id: ""
	I1210 07:53:09.617393 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.617401 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:09.617407 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:09.617465 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:09.645730 1078428 cri.go:89] found id: ""
	I1210 07:53:09.645755 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.645779 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:09.645786 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:09.645850 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:09.672062 1078428 cri.go:89] found id: ""
	I1210 07:53:09.672088 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.672097 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:09.672103 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:09.672174 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:09.695770 1078428 cri.go:89] found id: ""
	I1210 07:53:09.695793 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.695802 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:09.695811 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:09.695822 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:09.721144 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:09.721180 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:09.748337 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:09.748367 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:09.802348 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:09.802384 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:09.818196 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:09.818226 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:09.884770 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:09.876535    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.877364    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.879072    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.879398    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.880893    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:09.876535    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.877364    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.879072    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.879398    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.880893    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
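	Each "describe nodes" gather step is the single kubectl invocation shown in the failure header; with nothing listening on localhost:8443 it exits with status 1, which is why the identical stderr block is dumped on every pass. Verbatim from the log, for manual reproduction:

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig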
	I1210 07:53:12.385627 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:12.396288 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:12.396367 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:12.421158 1078428 cri.go:89] found id: ""
	I1210 07:53:12.421194 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.421204 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:12.421210 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:12.421281 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:12.446171 1078428 cri.go:89] found id: ""
	I1210 07:53:12.446206 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.446216 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:12.446222 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:12.446294 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:12.470791 1078428 cri.go:89] found id: ""
	I1210 07:53:12.470818 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.470828 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:12.470836 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:12.470895 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:12.499441 1078428 cri.go:89] found id: ""
	I1210 07:53:12.499467 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.499476 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:12.499483 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:12.499561 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:12.524188 1078428 cri.go:89] found id: ""
	I1210 07:53:12.524211 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.524219 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:12.524225 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:12.524285 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:12.550501 1078428 cri.go:89] found id: ""
	I1210 07:53:12.550528 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.550537 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:12.550543 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:12.550617 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:12.578576 1078428 cri.go:89] found id: ""
	I1210 07:53:12.578602 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.578611 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:12.578616 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:12.578687 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:12.612078 1078428 cri.go:89] found id: ""
	I1210 07:53:12.612113 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.612122 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:12.612132 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:12.612144 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:12.645096 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:12.645125 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:12.700179 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:12.700217 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:12.715578 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:12.715606 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:12.781369 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:12.772466    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.773589    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.774343    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.775989    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.776533    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:12.772466    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.773589    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.774343    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.775989    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.776533    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:12.781391 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:12.781403 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1210 07:53:11.554633 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:14.054508 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:15.306176 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:15.317232 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:15.317315 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:15.336640 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:53:15.353595 1078428 cri.go:89] found id: ""
	I1210 07:53:15.353626 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.353635 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:15.353642 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:15.353703 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1210 07:53:15.421893 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:53:15.421994 1078428 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:53:15.422157 1078428 cri.go:89] found id: ""
	I1210 07:53:15.422177 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.422185 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:15.422192 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:15.422270 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:15.447660 1078428 cri.go:89] found id: ""
	I1210 07:53:15.447684 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.447693 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:15.447699 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:15.447763 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:15.471893 1078428 cri.go:89] found id: ""
	I1210 07:53:15.471918 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.471927 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:15.471934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:15.472003 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:15.496880 1078428 cri.go:89] found id: ""
	I1210 07:53:15.496915 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.496924 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:15.496930 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:15.496999 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:15.525007 1078428 cri.go:89] found id: ""
	I1210 07:53:15.525043 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.525055 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:15.525061 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:15.525138 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:15.556732 1078428 cri.go:89] found id: ""
	I1210 07:53:15.556776 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.556785 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:15.556792 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:15.556864 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:15.592802 1078428 cri.go:89] found id: ""
	I1210 07:53:15.592835 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.592844 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:15.592854 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:15.592866 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:15.660809 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:15.660846 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:15.677009 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:15.677040 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:15.743204 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:15.734495    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.735207    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.736858    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.737446    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.739134    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:15.734495    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.735207    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.736858    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.737446    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.739134    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:15.743227 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:15.743239 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:15.768020 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:15.768053 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:18.297028 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:18.310128 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:18.310198 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:18.340476 1078428 cri.go:89] found id: ""
	I1210 07:53:18.340572 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.340599 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:18.340642 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:18.340769 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
E1210 07:57:35.782590  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
	I1210 07:53:18.369516 1078428 cri.go:89] found id: ""
	I1210 07:53:18.369582 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.369614 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:18.369633 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:18.369753 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:18.396295 1078428 cri.go:89] found id: ""
	I1210 07:53:18.396321 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.396330 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:18.396336 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:18.396428 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:18.422012 1078428 cri.go:89] found id: ""
	I1210 07:53:18.422037 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.422046 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:18.422052 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:18.422164 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:18.446495 1078428 cri.go:89] found id: ""
	I1210 07:53:18.446518 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.446526 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:18.446532 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:18.446600 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:18.471650 1078428 cri.go:89] found id: ""
	I1210 07:53:18.471674 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.471682 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:18.471688 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:18.471779 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:18.495591 1078428 cri.go:89] found id: ""
	I1210 07:53:18.495616 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.495624 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:18.495631 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:18.495694 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:18.523464 1078428 cri.go:89] found id: ""
	I1210 07:53:18.523489 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.523497 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:18.523506 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:18.523518 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:18.585434 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:18.585481 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:18.610315 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:18.610344 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:18.674572 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:18.666289    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.667061    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.668721    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.669016    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.670764    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:18.666289    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.667061    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.668721    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.669016    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.670764    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:18.674593 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:18.674607 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:18.699401 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:18.699435 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:19.389521 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:53:19.452005 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:53:19.452105 1078428 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:53:19.455408 1078428 out.go:179] * Enabled addons: 
	I1210 07:53:19.458237 1078428 addons.go:530] duration metric: took 1m57.316864384s for enable addons: enabled=[]
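	Addon enablement gives up the same way in both runs: kubectl apply cannot even download the openapi schema, so minikube records enabled=[]. The failing command, verbatim, with a caveat on kubectl's own hint (the comment is an inference, not log output):

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	        /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	        -f /etc/kubernetes/addons/storage-provisioner.yaml
	    # kubectl suggests --validate=false, but that only skips schema
	    # validation; the apply itself still needs a reachable apiserver,
	    # so it would fail the same way here.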
	W1210 07:53:16.054718 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:18.554815 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:21.227168 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:21.237506 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:21.237577 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:21.261812 1078428 cri.go:89] found id: ""
	I1210 07:53:21.261842 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.261852 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:21.261858 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:21.261921 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:21.289741 1078428 cri.go:89] found id: ""
	I1210 07:53:21.289767 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.289787 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:21.289794 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:21.289855 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:21.331373 1078428 cri.go:89] found id: ""
	I1210 07:53:21.331400 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.331410 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:21.331415 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:21.331534 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:21.364401 1078428 cri.go:89] found id: ""
	I1210 07:53:21.364427 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.364436 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:21.364443 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:21.364504 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:21.395936 1078428 cri.go:89] found id: ""
	I1210 07:53:21.395965 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.395975 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:21.395981 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:21.396044 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:21.420965 1078428 cri.go:89] found id: ""
	I1210 07:53:21.420996 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.421005 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:21.421012 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:21.421073 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:21.446318 1078428 cri.go:89] found id: ""
	I1210 07:53:21.446345 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.446354 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:21.446360 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:21.446422 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:21.475470 1078428 cri.go:89] found id: ""
	I1210 07:53:21.475499 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.475509 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:21.475521 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:21.475537 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:21.530313 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:21.530354 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:21.548651 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:21.548737 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:21.632055 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:21.623055    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.623614    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.625291    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.625976    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.627769    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:21.632137 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:21.632157 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:21.659428 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:21.659466 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
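The entries above are one full pass of the loop that repeats every few seconds below: query crictl for each expected control-plane container by name, find zero matches, then fall back to gathering journal logs. A minimal sketch of the enumeration step, assuming crictl and sudo are available on the host; the component list mirrors the names queried in the log:

```go
// Enumerate expected control-plane containers via crictl, as the log's
// repeated "listing CRI containers" steps do. A sketch, not minikube source.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, name := range components {
		// Same query the log records: all containers (any state) matching the name.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl query for %q failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}
```

When every query returns an empty ID list, as it does here on each pass, the control plane never came up under containerd, which is why the subsequent describe-nodes calls cannot reach localhost:8443.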
	I1210 07:53:24.192421 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:24.203056 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:24.203137 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:24.232457 1078428 cri.go:89] found id: ""
	I1210 07:53:24.232493 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.232502 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:24.232509 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:24.232576 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:24.260730 1078428 cri.go:89] found id: ""
	I1210 07:53:24.260758 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.260768 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:24.260774 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:24.260837 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:24.284981 1078428 cri.go:89] found id: ""
	I1210 07:53:24.285009 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.285018 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:24.285024 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:24.285086 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:24.316578 1078428 cri.go:89] found id: ""
	I1210 07:53:24.316604 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.316613 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:24.316619 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:24.316678 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:24.353587 1078428 cri.go:89] found id: ""
	I1210 07:53:24.353622 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.353638 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:24.353645 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:24.353740 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:24.384460 1078428 cri.go:89] found id: ""
	I1210 07:53:24.384483 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.384492 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:24.384498 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:24.384562 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:24.414252 1078428 cri.go:89] found id: ""
	I1210 07:53:24.414280 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.414290 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:24.414296 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:24.414361 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:24.442225 1078428 cri.go:89] found id: ""
	I1210 07:53:24.442247 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.442256 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:24.442265 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:24.442276 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:24.467596 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:24.467629 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:53:21.054852 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:23.554809 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:24.499949 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:24.499977 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:24.558185 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:24.558223 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:24.576232 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:24.576264 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:24.646699 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:24.638205    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.639089    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.639811    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.641363    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.641799    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:27.148382 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:27.158984 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:27.159102 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:27.183857 1078428 cri.go:89] found id: ""
	I1210 07:53:27.183927 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.183943 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:27.183951 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:27.184028 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:27.207461 1078428 cri.go:89] found id: ""
	I1210 07:53:27.207529 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.207554 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:27.207568 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:27.207645 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:27.234849 1078428 cri.go:89] found id: ""
	I1210 07:53:27.234876 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.234884 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:27.234890 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:27.234948 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:27.258887 1078428 cri.go:89] found id: ""
	I1210 07:53:27.258910 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.258919 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:27.258926 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:27.258983 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:27.283113 1078428 cri.go:89] found id: ""
	I1210 07:53:27.283189 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.283206 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:27.283214 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:27.283283 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:27.324968 1078428 cri.go:89] found id: ""
	I1210 07:53:27.324994 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.325004 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:27.325010 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:27.325070 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:27.355711 1078428 cri.go:89] found id: ""
	I1210 07:53:27.355739 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.355749 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:27.355755 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:27.355817 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:27.383387 1078428 cri.go:89] found id: ""
	I1210 07:53:27.383424 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.383435 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:27.383445 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:27.383456 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:27.408324 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:27.408363 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:27.438348 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:27.438424 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:27.496282 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:27.496317 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:27.512354 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:27.512385 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:27.586988 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:27.577963    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.578714    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.580435    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.580907    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.582816    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 07:53:26.054246 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:28.554092 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
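Interleaved with that loop, process 1077343 is polling the no-preload-587009 node's Ready condition against 192.168.85.2:8443 on a roughly 2.5-second cadence and hitting the same refused connections. A minimal sketch of such a poll (not minikube's node_ready.go; the URL and cadence come from the log, and skipping TLS verification is an assumption so the sketch runs without the cluster's CA bundle):

```go
// Poll the Node object from the apiserver, retrying while the connection
// is refused. A sketch of the pattern behind the node_ready.go warnings.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch: skip CA verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009"
	for attempt := 1; attempt <= 5; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			// Matches the log: dial tcp 192.168.85.2:8443: connect: connection refused
			fmt.Printf("attempt %d: %v (will retry)\n", attempt, err)
			time.Sleep(2500 * time.Millisecond)
			continue
		}
		resp.Body.Close()
		fmt.Println("apiserver answered with status", resp.Status)
		return
	}
}
```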
	I1210 07:53:30.088030 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:30.100373 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:30.100449 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:30.127922 1078428 cri.go:89] found id: ""
	I1210 07:53:30.127998 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.128023 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:30.128041 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:30.128120 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:30.160672 1078428 cri.go:89] found id: ""
	I1210 07:53:30.160699 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.160709 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:30.160722 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:30.160784 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:30.186050 1078428 cri.go:89] found id: ""
	I1210 07:53:30.186077 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.186086 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:30.186093 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:30.186157 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:30.211107 1078428 cri.go:89] found id: ""
	I1210 07:53:30.211132 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.211141 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:30.211147 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:30.211213 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:30.235571 1078428 cri.go:89] found id: ""
	I1210 07:53:30.235598 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.235608 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:30.235615 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:30.235678 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:30.264308 1078428 cri.go:89] found id: ""
	I1210 07:53:30.264331 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.264339 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:30.264346 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:30.264413 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:30.288489 1078428 cri.go:89] found id: ""
	I1210 07:53:30.288557 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.288581 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:30.288594 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:30.288673 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:30.318600 1078428 cri.go:89] found id: ""
	I1210 07:53:30.318628 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.318638 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:30.318648 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:30.318679 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:30.359074 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:30.359103 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:30.417146 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:30.417182 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:30.432931 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:30.432960 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:30.497452 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:30.488702    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.489502    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.491238    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.491784    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.493510    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:30.497474 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:30.497487 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:33.027579 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:33.038128 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:33.038197 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:33.063535 1078428 cri.go:89] found id: ""
	I1210 07:53:33.063560 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.063572 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:33.063578 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:33.063642 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:33.087384 1078428 cri.go:89] found id: ""
	I1210 07:53:33.087406 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.087414 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:33.087420 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:33.087478 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:33.112186 1078428 cri.go:89] found id: ""
	I1210 07:53:33.112247 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.112258 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:33.112265 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:33.112326 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:33.136102 1078428 cri.go:89] found id: ""
	I1210 07:53:33.136125 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.136133 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:33.136139 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:33.136202 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:33.160865 1078428 cri.go:89] found id: ""
	I1210 07:53:33.160931 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.160957 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:33.160986 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:33.161071 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:33.185964 1078428 cri.go:89] found id: ""
	I1210 07:53:33.186031 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.186054 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:33.186075 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:33.186150 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:33.211060 1078428 cri.go:89] found id: ""
	I1210 07:53:33.211086 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.211095 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:33.211100 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:33.211180 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:33.236111 1078428 cri.go:89] found id: ""
	I1210 07:53:33.236180 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.236213 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:33.236227 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:33.236251 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:33.252003 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:33.252029 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:33.315902 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:33.308251    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.308659    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.310144    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.310442    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.311844    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:33.315967 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:33.316003 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:33.342524 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:33.342604 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:33.377391 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:33.377419 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:53:30.554186 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:33.054061 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:35.054801 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:35.933860 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:35.945070 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:35.945142 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:35.971394 1078428 cri.go:89] found id: ""
	I1210 07:53:35.971423 1078428 logs.go:282] 0 containers: []
	W1210 07:53:35.971432 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:35.971438 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:35.971501 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:36.005170 1078428 cri.go:89] found id: ""
	I1210 07:53:36.005227 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.005240 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:36.005248 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:36.005329 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:36.035275 1078428 cri.go:89] found id: ""
	I1210 07:53:36.035299 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.035307 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:36.035313 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:36.035380 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:36.060232 1078428 cri.go:89] found id: ""
	I1210 07:53:36.060255 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.060266 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:36.060272 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:36.060336 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:36.084825 1078428 cri.go:89] found id: ""
	I1210 07:53:36.084850 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.084859 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:36.084866 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:36.084955 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:36.110606 1078428 cri.go:89] found id: ""
	I1210 07:53:36.110630 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.110639 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:36.110664 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:36.110728 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:36.139205 1078428 cri.go:89] found id: ""
	I1210 07:53:36.139232 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.139241 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:36.139248 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:36.139358 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:36.165255 1078428 cri.go:89] found id: ""
	I1210 07:53:36.165279 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.165287 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:36.165296 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:36.165308 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:36.190967 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:36.191003 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:36.228036 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:36.228070 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:36.283588 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:36.283626 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:36.308631 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:36.308660 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:36.382721 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:36.374555    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.375219    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.376727    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.377183    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.378650    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:38.882925 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:38.893611 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:38.893738 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:38.919385 1078428 cri.go:89] found id: ""
	I1210 07:53:38.919418 1078428 logs.go:282] 0 containers: []
	W1210 07:53:38.919427 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:38.919433 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:38.919504 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:38.943787 1078428 cri.go:89] found id: ""
	I1210 07:53:38.943814 1078428 logs.go:282] 0 containers: []
	W1210 07:53:38.943824 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:38.943832 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:38.943896 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:38.968361 1078428 cri.go:89] found id: ""
	I1210 07:53:38.968433 1078428 logs.go:282] 0 containers: []
	W1210 07:53:38.968451 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:38.968458 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:38.968520 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:38.995636 1078428 cri.go:89] found id: ""
	I1210 07:53:38.995661 1078428 logs.go:282] 0 containers: []
	W1210 07:53:38.995670 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:38.995677 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:38.995754 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:39.021416 1078428 cri.go:89] found id: ""
	I1210 07:53:39.021452 1078428 logs.go:282] 0 containers: []
	W1210 07:53:39.021462 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:39.021470 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:39.021552 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:39.048415 1078428 cri.go:89] found id: ""
	I1210 07:53:39.048441 1078428 logs.go:282] 0 containers: []
	W1210 07:53:39.048450 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:39.048456 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:39.048545 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:39.074528 1078428 cri.go:89] found id: ""
	I1210 07:53:39.074554 1078428 logs.go:282] 0 containers: []
	W1210 07:53:39.074563 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:39.074569 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:39.074633 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:39.099525 1078428 cri.go:89] found id: ""
	I1210 07:53:39.099551 1078428 logs.go:282] 0 containers: []
	W1210 07:53:39.099571 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:39.099581 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:39.099594 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:39.166056 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:39.157362    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.157880    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.159752    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.160310    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.161990    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:39.166080 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:39.166094 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:39.191445 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:39.191482 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:39.221901 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:39.221931 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:39.276698 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:39.276735 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:53:37.554212 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:40.054014 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
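When no containers are found, each pass falls back to the journal, pulling the last 400 lines per unit exactly as the logged commands show. A minimal sketch of that gathering step, assuming a systemd host with sudo; the unit names and 400-line window are taken from the log:

```go
// Collect the tail of each relevant systemd unit's journal, mirroring the
// "Gathering logs for ..." steps above. A sketch, not minikube's logs.go.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, unit := range []string{"kubelet", "containerd"} {
		out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", unit, err)
			continue
		}
		fmt.Printf("== last 400 %s journal lines ==\n%s", unit, out)
	}
}
```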
	I1210 07:53:41.793231 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:41.806351 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:41.806419 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:41.833486 1078428 cri.go:89] found id: ""
	I1210 07:53:41.833508 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.833517 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:41.833523 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:41.833587 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:41.863627 1078428 cri.go:89] found id: ""
	I1210 07:53:41.863650 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.863659 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:41.863665 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:41.863723 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:41.891468 1078428 cri.go:89] found id: ""
	I1210 07:53:41.891492 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.891502 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:41.891509 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:41.891575 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:41.916517 1078428 cri.go:89] found id: ""
	I1210 07:53:41.916542 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.916550 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:41.916557 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:41.916616 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:41.942528 1078428 cri.go:89] found id: ""
	I1210 07:53:41.942555 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.942577 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:41.942584 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:41.942646 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:41.966600 1078428 cri.go:89] found id: ""
	I1210 07:53:41.966624 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.966633 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:41.966639 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:41.966707 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:41.990797 1078428 cri.go:89] found id: ""
	I1210 07:53:41.990831 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.990840 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:41.990846 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:41.990914 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:42.024121 1078428 cri.go:89] found id: ""
	I1210 07:53:42.024148 1078428 logs.go:282] 0 containers: []
	W1210 07:53:42.024158 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:42.024169 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:42.024181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:42.080753 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:42.080799 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:42.098930 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:42.098965 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:42.176005 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:42.165806    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.166811    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.168570    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.169232    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.170912    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:42.165806    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.166811    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.168570    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.169232    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.170912    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:42.176075 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:42.176108 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:42.205998 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:42.206045 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
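Each retry cycle above runs the same per-component crictl query. A minimal shell sketch of that loop, with the command taken verbatim from the log lines and the component names in the order minikube checks them:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  # an empty result corresponds to the 'No container was found matching "<name>"' lines above
	  [ -z "$ids" ] && echo "no container matching $c"
	done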
	W1210 07:53:42.054513 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:44.553993 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:44.740690 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:44.751788 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:44.751908 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:44.777536 1078428 cri.go:89] found id: ""
	I1210 07:53:44.777563 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.777571 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:44.777578 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:44.777640 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:44.805133 1078428 cri.go:89] found id: ""
	I1210 07:53:44.805161 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.805170 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:44.805176 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:44.805237 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:44.842340 1078428 cri.go:89] found id: ""
	I1210 07:53:44.842368 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.842383 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:44.842390 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:44.842451 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:44.875009 1078428 cri.go:89] found id: ""
	I1210 07:53:44.875035 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.875044 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:44.875050 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:44.875144 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:44.900854 1078428 cri.go:89] found id: ""
	I1210 07:53:44.900880 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.900889 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:44.900895 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:44.900993 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:44.926168 1078428 cri.go:89] found id: ""
	I1210 07:53:44.926194 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.926203 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:44.926210 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:44.926302 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:44.951565 1078428 cri.go:89] found id: ""
	I1210 07:53:44.951590 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.951599 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:44.951605 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:44.951700 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:44.981123 1078428 cri.go:89] found id: ""
	I1210 07:53:44.981151 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.981160 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:44.981170 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:44.981181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:45.061176 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:45.048172    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.049203    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.052057    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.052627    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.055814    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:45.048172    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.049203    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.052057    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.052627    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.055814    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:45.061213 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:45.061227 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:45.119245 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:45.119283 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:45.172398 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:45.172430 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:45.255583 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:45.255726 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
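The four log sources gathered on every cycle can be pulled manually with the same commands the report shows (flags copied verbatim; `$(...)` substituted for the log's backtick form):

	sudo journalctl -u containerd -n 400
	sudo journalctl -u kubelet -n 400
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400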
	I1210 07:53:47.779428 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:47.790537 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:47.790611 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:47.831579 1078428 cri.go:89] found id: ""
	I1210 07:53:47.831602 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.831610 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:47.831617 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:47.831677 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:47.859808 1078428 cri.go:89] found id: ""
	I1210 07:53:47.859835 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.859844 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:47.859850 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:47.859916 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:47.885720 1078428 cri.go:89] found id: ""
	I1210 07:53:47.885745 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.885754 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:47.885761 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:47.885829 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:47.910568 1078428 cri.go:89] found id: ""
	I1210 07:53:47.910594 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.910604 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:47.910610 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:47.910668 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:47.934447 1078428 cri.go:89] found id: ""
	I1210 07:53:47.934495 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.934505 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:47.934511 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:47.934571 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:47.959745 1078428 cri.go:89] found id: ""
	I1210 07:53:47.959772 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.959782 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:47.959788 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:47.959871 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:47.984059 1078428 cri.go:89] found id: ""
	I1210 07:53:47.984085 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.984095 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:47.984102 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:47.984163 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:48.011978 1078428 cri.go:89] found id: ""
	I1210 07:53:48.012007 1078428 logs.go:282] 0 containers: []
	W1210 07:53:48.012018 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:48.012030 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:48.012043 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:48.069700 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:48.069738 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:48.086303 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:48.086345 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:48.160973 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:48.152656    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.153231    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.154815    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.155339    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.156984    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:48.152656    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.153231    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.154815    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.155339    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.156984    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:48.160994 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:48.161008 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:48.185832 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:48.185868 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:53:46.554777 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:49.054179 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:50.713469 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:50.724372 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:50.724452 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:50.750268 1078428 cri.go:89] found id: ""
	I1210 07:53:50.750292 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.750300 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:50.750306 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:50.750368 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:50.776624 1078428 cri.go:89] found id: ""
	I1210 07:53:50.776689 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.776704 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:50.776711 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:50.776769 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:50.807024 1078428 cri.go:89] found id: ""
	I1210 07:53:50.807051 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.807060 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:50.807070 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:50.807127 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:50.851753 1078428 cri.go:89] found id: ""
	I1210 07:53:50.851831 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.851855 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:50.851879 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:50.852000 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:50.878419 1078428 cri.go:89] found id: ""
	I1210 07:53:50.878571 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.878589 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:50.878597 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:50.878667 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:50.904710 1078428 cri.go:89] found id: ""
	I1210 07:53:50.904741 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.904750 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:50.904756 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:50.904819 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:50.929368 1078428 cri.go:89] found id: ""
	I1210 07:53:50.929398 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.929421 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:50.929428 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:50.929495 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:50.956973 1078428 cri.go:89] found id: ""
	I1210 07:53:50.956998 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.957006 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:50.957016 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:50.957028 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:50.982743 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:50.982778 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:51.015675 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:51.015706 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:51.072656 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:51.072697 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:51.089028 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:51.089115 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:51.156089 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:51.147578    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.148310    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.150005    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.150357    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.151933    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:51.147578    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.148310    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.150005    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.150357    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.151933    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
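The failing "describe nodes" step is the same command on every cycle; it targets localhost:8443 on the node itself, so it fails for the same reason as the probes above. A sketch for running it by hand on the node (e.g. via minikube ssh), with binary and kubeconfig paths exactly as in the report, plus a quick listener check (assumption: ss is installed on the node image):

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"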
	I1210 07:53:53.657305 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:53.668282 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:53.668364 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:53.693314 1078428 cri.go:89] found id: ""
	I1210 07:53:53.693340 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.693349 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:53.693356 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:53.693417 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:53.718128 1078428 cri.go:89] found id: ""
	I1210 07:53:53.718154 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.718169 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:53.718176 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:53.718234 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:53.744359 1078428 cri.go:89] found id: ""
	I1210 07:53:53.744397 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.744406 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:53.744412 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:53.744485 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:53.773658 1078428 cri.go:89] found id: ""
	I1210 07:53:53.773737 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.773760 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:53.773782 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:53.773879 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:53.804702 1078428 cri.go:89] found id: ""
	I1210 07:53:53.804772 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.804796 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:53.804815 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:53.804905 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:53.840639 1078428 cri.go:89] found id: ""
	I1210 07:53:53.840706 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.840730 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:53.840753 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:53.840846 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:53.869303 1078428 cri.go:89] found id: ""
	I1210 07:53:53.869373 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.869397 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:53.869419 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:53.869508 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:53.898651 1078428 cri.go:89] found id: ""
	I1210 07:53:53.898742 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.898764 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:53.898787 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:53.898821 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:53.924144 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:53.924181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:53.953086 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:53.953118 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:54.008451 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:54.008555 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:54.027281 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:54.027312 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:54.091065 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:54.082296    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.083017    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.084636    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.085187    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.086808    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:54.082296    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.083017    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.084636    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.085187    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.086808    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1210 07:53:51.054819 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:53.554121 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:56.591259 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:56.602391 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:56.602493 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:56.627566 1078428 cri.go:89] found id: ""
	I1210 07:53:56.627597 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.627607 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:56.627614 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:56.627677 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:56.654900 1078428 cri.go:89] found id: ""
	I1210 07:53:56.654928 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.654937 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:56.654944 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:56.655007 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:56.679562 1078428 cri.go:89] found id: ""
	I1210 07:53:56.679592 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.679606 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:56.679612 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:56.679737 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:56.703320 1078428 cri.go:89] found id: ""
	I1210 07:53:56.703345 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.703355 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:56.703361 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:56.703420 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:56.731538 1078428 cri.go:89] found id: ""
	I1210 07:53:56.731564 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.731573 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:56.731579 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:56.731664 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:56.756416 1078428 cri.go:89] found id: ""
	I1210 07:53:56.756442 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.756451 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:56.756457 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:56.756523 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:56.785074 1078428 cri.go:89] found id: ""
	I1210 07:53:56.785097 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.785106 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:56.785111 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:56.785171 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:56.815793 1078428 cri.go:89] found id: ""
	I1210 07:53:56.815821 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.815831 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:56.815842 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:56.815856 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:56.834351 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:56.834380 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:56.907823 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:56.899947    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.900451    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.901995    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.902428    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.903899    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:56.899947    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.900451    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.901995    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.902428    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.903899    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:56.907857 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:56.907871 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:56.933197 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:56.933233 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:56.964346 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:56.964378 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:53:55.554659 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:58.054078 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:00.054143 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
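For reference, the node_ready check that process 1077343 keeps retrying amounts to reading the node's Ready condition. A hedged equivalent (profile name and API path taken from the warnings; the jsonpath filter is standard kubectl syntax):

	kubectl get node no-preload-587009 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# or hit the API path from the warnings directly; it returns "connection refused" while 8443 is down:
	curl -k https://192.168.85.2:8443/api/v1/nodes/no-preload-587009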
	I1210 07:53:59.520946 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:59.531324 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:59.531414 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:59.563870 1078428 cri.go:89] found id: ""
	I1210 07:53:59.563897 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.563907 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:59.563913 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:59.564000 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:59.593355 1078428 cri.go:89] found id: ""
	I1210 07:53:59.593385 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.593394 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:59.593400 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:59.593468 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:59.620235 1078428 cri.go:89] found id: ""
	I1210 07:53:59.620263 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.620272 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:59.620278 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:59.620338 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:59.645074 1078428 cri.go:89] found id: ""
	I1210 07:53:59.645099 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.645108 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:59.645114 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:59.645178 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:59.673804 1078428 cri.go:89] found id: ""
	I1210 07:53:59.673830 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.673839 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:59.673845 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:59.673902 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:59.697766 1078428 cri.go:89] found id: ""
	I1210 07:53:59.697793 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.697803 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:59.697810 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:59.697868 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:59.725582 1078428 cri.go:89] found id: ""
	I1210 07:53:59.725608 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.725617 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:59.725623 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:59.725681 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:59.750402 1078428 cri.go:89] found id: ""
	I1210 07:53:59.750428 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.750437 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:59.750447 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:59.750458 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:59.775346 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:59.775383 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:59.815776 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:59.815804 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:59.876120 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:59.876164 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:59.897440 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:59.897470 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:59.962486 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:59.954416    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.955115    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.956695    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.956999    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.958498    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:59.954416    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.955115    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.956695    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.956999    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.958498    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:02.463154 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:02.473950 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:02.474039 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:02.498884 1078428 cri.go:89] found id: ""
	I1210 07:54:02.498907 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.498916 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:02.498923 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:02.498982 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:02.523553 1078428 cri.go:89] found id: ""
	I1210 07:54:02.523582 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.523591 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:02.523597 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:02.523660 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:02.552876 1078428 cri.go:89] found id: ""
	I1210 07:54:02.552902 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.552911 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:02.552918 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:02.552976 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:02.583793 1078428 cri.go:89] found id: ""
	I1210 07:54:02.583818 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.583827 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:02.583833 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:02.583895 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:02.625932 1078428 cri.go:89] found id: ""
	I1210 07:54:02.625959 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.625969 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:02.625976 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:02.626044 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:02.652709 1078428 cri.go:89] found id: ""
	I1210 07:54:02.652784 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.652800 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:02.652808 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:02.652868 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:02.680830 1078428 cri.go:89] found id: ""
	I1210 07:54:02.680859 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.680868 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:02.680874 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:02.680933 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:02.706663 1078428 cri.go:89] found id: ""
	I1210 07:54:02.706687 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.706696 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:02.706704 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:02.706715 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:02.763069 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:02.763105 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:02.779309 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:02.779340 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:02.864302 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:02.854179    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.854989    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.858064    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.858689    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.860317    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:02.854179    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.854989    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.858064    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.858689    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.860317    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:02.864326 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:02.864339 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:02.890235 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:02.890274 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
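Each scan in the cycle above shells out to "crictl ps -a --quiet --name=<component>" and treats empty stdout as zero containers, which is why every probe logs found id: "" (cri.go:89) immediately followed by 0 containers (logs.go:282). A minimal Go sketch of that probe — a simplification, not minikube's actual cri.go; the sudo wrapper and a crictl binary on PATH are assumptions:

    // listContainers is a simplified sketch of the probe logged above:
    // run "crictl ps -a --quiet --name=<name>" and treat empty output
    // as "no containers found".
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line) // --quiet prints one container ID per line
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := listContainers("kube-apiserver")
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }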
	W1210 07:54:02.554570 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:04.555006 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
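The two W... node_ready.go:55 lines interleaved here come from a second test process (pid 1077343, driving the no-preload-587009 profile) that polls the node's Ready condition and keeps retrying while the API server refuses connections. Roughly, in client-go terms — a sketch only, not minikube's actual node_ready.go; the kubeconfig path and the 2-second interval are illustrative assumptions:

    // Poll a node's Ready condition, retrying on API errors, the way the
    // node_ready.go warnings above suggest the harness does.
    package main

    import (
        "context"
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-587009", metav1.GetOptions{})
            if err != nil {
                // Matches the "error getting node ... (will retry)" warnings above.
                fmt.Println("error getting node (will retry):", err)
                time.Sleep(2 * time.Second)
                continue
            }
            for _, c := range node.Status.Conditions {
                if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
                    fmt.Println("node is Ready")
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
    }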
	I1210 07:54:05.418128 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:05.429523 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:05.429604 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:05.456726 1078428 cri.go:89] found id: ""
	I1210 07:54:05.456755 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.456765 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:05.456772 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:05.456851 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:05.485039 1078428 cri.go:89] found id: ""
	I1210 07:54:05.485065 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.485074 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:05.485080 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:05.485169 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:05.510634 1078428 cri.go:89] found id: ""
	I1210 07:54:05.510658 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.510668 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:05.510674 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:05.510733 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:05.536710 1078428 cri.go:89] found id: ""
	I1210 07:54:05.536743 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.536753 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:05.536760 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:05.536848 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:05.568911 1078428 cri.go:89] found id: ""
	I1210 07:54:05.568991 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.569015 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:05.569040 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:05.569150 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:05.598888 1078428 cri.go:89] found id: ""
	I1210 07:54:05.598964 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.598987 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:05.599007 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:05.599101 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:05.630665 1078428 cri.go:89] found id: ""
	I1210 07:54:05.630741 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.630771 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:05.630779 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:05.630850 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:05.654676 1078428 cri.go:89] found id: ""
	I1210 07:54:05.654702 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.654712 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:05.654722 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:05.654733 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:05.712685 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:05.712722 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:05.728743 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:05.728774 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:05.807287 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:05.790159    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.790981    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.792596    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.793583    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.794194    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:05.790159    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.790981    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.792596    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.793583    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.794194    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:05.807311 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:05.807325 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:05.835209 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:05.835246 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
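The "container status" gather above runs sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: prefer crictl when it is installed, and fall back to docker ps otherwise. An approximate Go equivalent — a sketch only; the real harness builds that shell string and runs it remotely rather than branching in Go:

    // Prefer crictl when present on PATH, otherwise fall back to docker,
    // mirroring the shell fallback in the "container status" gather above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func containerStatus() ([]byte, error) {
        tool := "crictl"
        if _, err := exec.LookPath("crictl"); err != nil {
            tool = "docker" // crictl not installed; try docker instead
        }
        return exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
    }

    func main() {
        out, err := containerStatus()
        fmt.Printf("err=%v\n%s", err, out)
    }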
	I1210 07:54:08.367017 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:08.377830 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:08.377904 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:08.402753 1078428 cri.go:89] found id: ""
	I1210 07:54:08.402778 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.402787 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:08.402795 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:08.402856 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:08.427920 1078428 cri.go:89] found id: ""
	I1210 07:54:08.427947 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.427956 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:08.427963 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:08.428021 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:08.453012 1078428 cri.go:89] found id: ""
	I1210 07:54:08.453037 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.453045 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:08.453052 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:08.453114 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:08.477565 1078428 cri.go:89] found id: ""
	I1210 07:54:08.477591 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.477606 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:08.477612 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:08.477673 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:08.501669 1078428 cri.go:89] found id: ""
	I1210 07:54:08.501694 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.501740 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:08.501750 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:08.501816 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:08.530594 1078428 cri.go:89] found id: ""
	I1210 07:54:08.530667 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.530704 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:08.530719 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:08.530799 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:08.561145 1078428 cri.go:89] found id: ""
	I1210 07:54:08.561171 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.561179 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:08.561186 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:08.561244 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:08.595663 1078428 cri.go:89] found id: ""
	I1210 07:54:08.595686 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.595695 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:08.595706 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:08.595718 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:08.622963 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:08.623002 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:08.652801 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:08.652829 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:08.708272 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:08.708307 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:08.724144 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:08.724174 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:08.790000 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:08.782113    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.782760    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.784347    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.784840    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.786314    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:08.782113    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.782760    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.784347    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.784840    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.786314    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
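All of these kubectl failures reduce to the same fact: nothing is listening on port 8443 on the node, which is consistent with every kube-apiserver container scan above coming back empty. A one-step TCP probe — purely illustrative, not part of the harness — reproduces the "connection refused" directly:

    // Dial the apiserver port once; with no apiserver running this fails
    // with the same "connect: connection refused" seen in the kubectl stderr.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on 8443")
    }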
	W1210 07:54:07.054035 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:09.054348 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:11.291584 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:11.302037 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:11.302111 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:11.331607 1078428 cri.go:89] found id: ""
	I1210 07:54:11.331631 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.331640 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:11.331646 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:11.331711 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:11.355008 1078428 cri.go:89] found id: ""
	I1210 07:54:11.355031 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.355039 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:11.355045 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:11.355104 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:11.380347 1078428 cri.go:89] found id: ""
	I1210 07:54:11.380423 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.380463 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:11.380485 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:11.380572 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:11.410797 1078428 cri.go:89] found id: ""
	I1210 07:54:11.410824 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.410834 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:11.410840 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:11.410898 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:11.435927 1078428 cri.go:89] found id: ""
	I1210 07:54:11.435996 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.436021 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:11.436035 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:11.436109 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:11.461484 1078428 cri.go:89] found id: ""
	I1210 07:54:11.461520 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.461529 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:11.461536 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:11.461603 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:11.486793 1078428 cri.go:89] found id: ""
	I1210 07:54:11.486817 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.486825 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:11.486831 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:11.486890 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:11.515338 1078428 cri.go:89] found id: ""
	I1210 07:54:11.515364 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.515374 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:11.515384 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:11.515396 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:11.593473 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:11.585339    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.586129    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.587754    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.588062    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.589542    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:11.585339    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.586129    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.587754    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.588062    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.589542    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:11.593495 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:11.593509 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:11.619492 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:11.619523 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:11.646739 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:11.646771 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:11.701149 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:11.701187 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:14.217342 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:14.228228 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:14.228306 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:14.254323 1078428 cri.go:89] found id: ""
	I1210 07:54:14.254360 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.254369 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:14.254375 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:14.254443 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:14.279268 1078428 cri.go:89] found id: ""
	I1210 07:54:14.279295 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.279303 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:14.279310 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:14.279397 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:14.304531 1078428 cri.go:89] found id: ""
	I1210 07:54:14.304558 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.304567 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:14.304574 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:14.304647 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:14.329458 1078428 cri.go:89] found id: ""
	I1210 07:54:14.329487 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.329496 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:14.329502 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:14.329563 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:14.359168 1078428 cri.go:89] found id: ""
	I1210 07:54:14.359241 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.359258 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:14.359266 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:14.359348 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:14.386391 1078428 cri.go:89] found id: ""
	I1210 07:54:14.386426 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.386435 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:14.386442 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:14.386540 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:14.411808 1078428 cri.go:89] found id: ""
	I1210 07:54:14.411843 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.411862 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:14.411870 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:14.411946 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:14.440262 1078428 cri.go:89] found id: ""
	I1210 07:54:14.440292 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.440301 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:14.440311 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:14.440322 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:54:11.553952 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:13.554999 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:14.496340 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:14.496376 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:14.512934 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:14.512963 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:14.584969 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:14.576398    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.577208    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.578910    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.579485    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.581005    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:14.576398    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.577208    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.578910    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.579485    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.581005    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:14.585042 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:14.585069 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:14.615045 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:14.615086 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
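Each retry cycle opens with sudo pgrep -xnf kube-apiserver.*minikube.* (as on the next line): -f matches against the full command line, -x requires an exact pattern match, and -n keeps only the newest hit, so pgrep's exit status alone answers whether an apiserver process exists before the slower container scans run. A minimal sketch of that check — sudo availability and standard pgrep semantics are the assumptions:

    // Use pgrep's exit status as a cheap "is kube-apiserver running?" probe,
    // as at the start of each retry cycle above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func apiserverRunning() bool {
        // pgrep exits 0 when a matching process exists; a non-zero exit
        // (no match) surfaces here as a non-nil *exec.ExitError.
        err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
        return err == nil
    }

    func main() {
        fmt.Println("kube-apiserver process present:", apiserverRunning())
    }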
	I1210 07:54:17.146612 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:17.157236 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:17.157307 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:17.184080 1078428 cri.go:89] found id: ""
	I1210 07:54:17.184102 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.184111 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:17.184117 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:17.184177 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:17.212720 1078428 cri.go:89] found id: ""
	I1210 07:54:17.212745 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.212754 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:17.212760 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:17.212822 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:17.238495 1078428 cri.go:89] found id: ""
	I1210 07:54:17.238521 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.238529 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:17.238542 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:17.238603 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:17.262892 1078428 cri.go:89] found id: ""
	I1210 07:54:17.262921 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.262930 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:17.262936 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:17.262996 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:17.291473 1078428 cri.go:89] found id: ""
	I1210 07:54:17.291498 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.291508 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:17.291514 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:17.291573 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:17.317108 1078428 cri.go:89] found id: ""
	I1210 07:54:17.317133 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.317142 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:17.317149 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:17.317209 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:17.344918 1078428 cri.go:89] found id: ""
	I1210 07:54:17.344944 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.344953 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:17.344959 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:17.345019 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:17.370082 1078428 cri.go:89] found id: ""
	I1210 07:54:17.370109 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.370118 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:17.370128 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:17.370139 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:17.427357 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:17.427407 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:17.443363 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:17.443393 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:17.509516 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:17.501130    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.501831    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.503489    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.503992    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.505575    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:17.501130    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.501831    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.503489    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.503992    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.505575    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:17.509538 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:17.509551 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:17.535043 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:17.535078 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:54:16.053965 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:18.054150 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:20.071194 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:20.083928 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:20.084059 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:20.119958 1078428 cri.go:89] found id: ""
	I1210 07:54:20.119987 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.119996 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:20.120002 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:20.120062 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:20.144861 1078428 cri.go:89] found id: ""
	I1210 07:54:20.144883 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.144891 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:20.144897 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:20.144957 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:20.180042 1078428 cri.go:89] found id: ""
	I1210 07:54:20.180069 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.180078 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:20.180085 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:20.180151 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:20.208390 1078428 cri.go:89] found id: ""
	I1210 07:54:20.208423 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.208432 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:20.208439 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:20.208511 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:20.234337 1078428 cri.go:89] found id: ""
	I1210 07:54:20.234358 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.234367 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:20.234373 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:20.234441 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:20.263116 1078428 cri.go:89] found id: ""
	I1210 07:54:20.263138 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.263146 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:20.263153 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:20.263213 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:20.287115 1078428 cri.go:89] found id: ""
	I1210 07:54:20.287188 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.287203 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:20.287210 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:20.287281 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:20.312391 1078428 cri.go:89] found id: ""
	I1210 07:54:20.312415 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.312423 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:20.312432 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:20.312443 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:20.369802 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:20.369838 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:20.387018 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:20.387099 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:20.458731 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:20.450398    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.451165    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.452844    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.453407    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.454975    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:20.450398    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.451165    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.452844    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.453407    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.454975    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:20.458801 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:20.458828 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:20.483627 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:20.483662 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:23.014658 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:23.025123 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:23.025235 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:23.060798 1078428 cri.go:89] found id: ""
	I1210 07:54:23.060872 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.060909 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:23.060934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:23.061025 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:23.092890 1078428 cri.go:89] found id: ""
	I1210 07:54:23.092965 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.092987 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:23.093018 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:23.093129 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:23.122215 1078428 cri.go:89] found id: ""
	I1210 07:54:23.122290 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.122314 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:23.122335 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:23.122418 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:23.147080 1078428 cri.go:89] found id: ""
	I1210 07:54:23.147108 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.147117 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:23.147123 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:23.147213 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:23.171020 1078428 cri.go:89] found id: ""
	I1210 07:54:23.171043 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.171052 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:23.171064 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:23.171120 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:23.195821 1078428 cri.go:89] found id: ""
	I1210 07:54:23.195889 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.195914 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:23.195929 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:23.196016 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:23.219821 1078428 cri.go:89] found id: ""
	I1210 07:54:23.219901 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.219926 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:23.219941 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:23.220025 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:23.248052 1078428 cri.go:89] found id: ""
	I1210 07:54:23.248079 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.248088 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:23.248098 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:23.248109 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:23.305179 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:23.305215 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:23.321081 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:23.321111 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:23.391528 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:23.376042    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.376445    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.378118    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.385464    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.387083    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
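All of the repeated "failed describe nodes" errors in this run come down to the same fact: nothing is listening on localhost:8443 inside the node, so every kubectl invocation dies with "connection refused". A minimal standalone Go check of that reachability (illustrative only; not part of minikube or the test suite):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Same endpoint kubectl keeps failing to reach in the log above.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not up:", err) // e.g. connect: connection refused
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on :8443")
    }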
	I1210 07:54:23.391553 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:23.391565 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:23.416476 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:23.416509 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
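Each polling cycle runs the same probe for every expected control-plane component: sudo crictl ps -a --quiet --name=<component>, where empty output means no matching container exists yet. A small Go sketch of that check, assuming passwordless sudo and crictl on the PATH (an illustration, not minikube's cri.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hasContainer mirrors the crictl probe in the log: an empty ID list
    // means the component's container does not exist yet.
    func hasContainer(name string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return false, err
    	}
    	return strings.TrimSpace(string(out)) != "", nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
    		found, err := hasContainer(c)
    		fmt.Printf("%-24s found=%v err=%v\n", c, found, err)
    	}
    }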
	W1210 07:54:20.554048 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:22.554698 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:24.554805 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
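The interleaved node_ready.go warnings belong to the parallel no-preload test (process 1077343), which keeps re-fetching its node object until the apiserver at 192.168.85.2:8443 starts accepting connections. A rough standalone equivalent over plain HTTPS; the real test uses an authenticated Kubernetes client, and only the URL and retry cadence here are taken from the log:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The cluster uses a self-signed CA; skip verification for this probe only.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009"
    	for i := 0; i < 10; i++ {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Println("will retry:", err) // matches the warnings above
    			time.Sleep(2 * time.Second)
    			continue
    		}
    		resp.Body.Close()
    		fmt.Println("apiserver reachable, status:", resp.Status)
    		return
    	}
    }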
	I1210 07:54:25.951859 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:25.962115 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:25.962185 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:25.986216 1078428 cri.go:89] found id: ""
	I1210 07:54:25.986286 1078428 logs.go:282] 0 containers: []
	W1210 07:54:25.986310 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:25.986334 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:25.986426 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:26.011668 1078428 cri.go:89] found id: ""
	I1210 07:54:26.011696 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.011705 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:26.011712 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:26.011773 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:26.037538 1078428 cri.go:89] found id: ""
	I1210 07:54:26.037560 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.037569 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:26.037575 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:26.037634 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:26.066974 1078428 cri.go:89] found id: ""
	I1210 07:54:26.066996 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.067006 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:26.067013 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:26.067071 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:26.100870 1078428 cri.go:89] found id: ""
	I1210 07:54:26.100892 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.100901 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:26.100907 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:26.100966 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:26.130861 1078428 cri.go:89] found id: ""
	I1210 07:54:26.130883 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.130891 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:26.130897 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:26.130957 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:26.156407 1078428 cri.go:89] found id: ""
	I1210 07:54:26.156429 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.156438 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:26.156444 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:26.156502 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:26.182081 1078428 cri.go:89] found id: ""
	I1210 07:54:26.182102 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.182110 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:26.182119 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:26.182133 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:26.239878 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:26.239917 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:26.259189 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:26.259219 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:26.328449 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:26.321353    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.321729    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.322876    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.323201    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.324632    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:26.328475 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:26.328490 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:26.353246 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:26.353278 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
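When no containers turn up, the cycle falls back to host-side diagnostics, each gathered with a single shell command over SSH. The commands below are copied verbatim from the log; wrapping them in a local Go runner is purely illustrative and assumes sudo access:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather runs one diagnostic command the way minikube's ssh_runner does:
    // through /bin/bash -c, capturing stdout and stderr together.
    func gather(name, cmd string) {
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
    }

    func main() {
    	gather("kubelet", "sudo journalctl -u kubelet -n 400")
    	gather("containerd", "sudo journalctl -u containerd -n 400")
    	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }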
	I1210 07:54:28.882607 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:28.893420 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:28.893495 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:28.917577 1078428 cri.go:89] found id: ""
	I1210 07:54:28.917603 1078428 logs.go:282] 0 containers: []
	W1210 07:54:28.917611 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:28.917617 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:28.917677 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:28.949094 1078428 cri.go:89] found id: ""
	I1210 07:54:28.949123 1078428 logs.go:282] 0 containers: []
	W1210 07:54:28.949132 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:28.949138 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:28.949202 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:28.976683 1078428 cri.go:89] found id: ""
	I1210 07:54:28.976708 1078428 logs.go:282] 0 containers: []
	W1210 07:54:28.976716 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:28.976722 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:28.976783 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:29.001326 1078428 cri.go:89] found id: ""
	I1210 07:54:29.001395 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.001420 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:29.001440 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:29.001526 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:29.026870 1078428 cri.go:89] found id: ""
	I1210 07:54:29.026894 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.026903 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:29.026909 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:29.026992 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:29.059072 1078428 cri.go:89] found id: ""
	I1210 07:54:29.059106 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.059115 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:29.059122 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:29.059190 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:29.089329 1078428 cri.go:89] found id: ""
	I1210 07:54:29.089363 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.089372 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:29.089379 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:29.089446 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:29.116648 1078428 cri.go:89] found id: ""
	I1210 07:54:29.116671 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.116680 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:29.116689 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:29.116701 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:29.141429 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:29.141465 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:29.168073 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:29.168102 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:29.223128 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:29.223165 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:29.239118 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:29.239149 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:29.304306 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:29.295477    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.296316    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.297933    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.298227    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.300445    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 07:54:27.054859 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:29.554545 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:31.805827 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:31.819227 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:31.819305 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:31.852872 1078428 cri.go:89] found id: ""
	I1210 07:54:31.852901 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.852910 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:31.852916 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:31.852973 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:31.881145 1078428 cri.go:89] found id: ""
	I1210 07:54:31.881173 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.881182 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:31.881188 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:31.881249 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:31.907195 1078428 cri.go:89] found id: ""
	I1210 07:54:31.907218 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.907227 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:31.907233 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:31.907292 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:31.931775 1078428 cri.go:89] found id: ""
	I1210 07:54:31.931799 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.931808 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:31.931814 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:31.931876 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:31.957735 1078428 cri.go:89] found id: ""
	I1210 07:54:31.957764 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.957772 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:31.957779 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:31.957837 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:31.982202 1078428 cri.go:89] found id: ""
	I1210 07:54:31.982285 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.982308 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:31.982334 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:31.982441 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:32.011091 1078428 cri.go:89] found id: ""
	I1210 07:54:32.011119 1078428 logs.go:282] 0 containers: []
	W1210 07:54:32.011129 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:32.011138 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:32.011205 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:32.039293 1078428 cri.go:89] found id: ""
	I1210 07:54:32.039371 1078428 logs.go:282] 0 containers: []
	W1210 07:54:32.039388 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:32.039399 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:32.039410 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:32.067441 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:32.067482 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:32.105238 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:32.105273 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:32.164873 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:32.164913 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:32.181394 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:32.181477 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:32.250195 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:32.241937    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.242446    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.244284    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.244640    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.246223    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 07:54:32.054006 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:34.054566 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:34.751129 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:34.761490 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:34.761559 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:34.785680 1078428 cri.go:89] found id: ""
	I1210 07:54:34.785702 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.785711 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:34.785716 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:34.785775 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:34.820785 1078428 cri.go:89] found id: ""
	I1210 07:54:34.820809 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.820817 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:34.820823 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:34.820892 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:34.852508 1078428 cri.go:89] found id: ""
	I1210 07:54:34.852531 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.852539 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:34.852545 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:34.852604 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:34.879064 1078428 cri.go:89] found id: ""
	I1210 07:54:34.879095 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.879104 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:34.879111 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:34.879179 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:34.908815 1078428 cri.go:89] found id: ""
	I1210 07:54:34.908849 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.908858 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:34.908864 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:34.908933 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:34.939793 1078428 cri.go:89] found id: ""
	I1210 07:54:34.939820 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.939831 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:34.939838 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:34.939902 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:34.966660 1078428 cri.go:89] found id: ""
	I1210 07:54:34.966730 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.966754 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:34.966775 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:34.966877 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:34.997175 1078428 cri.go:89] found id: ""
	I1210 07:54:34.997202 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.997211 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:34.997221 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:34.997233 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:35.054362 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:35.054504 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:35.071310 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:35.071339 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:35.154263 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:35.146083    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.146679    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.148200    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.148710    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.150271    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:35.154285 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:35.154298 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:35.184377 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:35.184427 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
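Each cycle opens with sudo pgrep -xnf kube-apiserver.*minikube.*, a process-level check for the apiserver that precedes the CRI probes; pgrep exits with status 1 when nothing matches, which is why the log records no output for it. An equivalent one-shot probe in Go (illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// -x: exact match, -n: newest process, -f: match the full command line.
    	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    	if err != nil {
    		fmt.Println("no kube-apiserver process yet:", err) // exit status 1 = no match
    		return
    	}
    	fmt.Println("kube-apiserver process is running")
    }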
	I1210 07:54:37.716479 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:37.727384 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:37.727475 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:37.758151 1078428 cri.go:89] found id: ""
	I1210 07:54:37.758175 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.758183 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:37.758189 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:37.758249 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:37.783547 1078428 cri.go:89] found id: ""
	I1210 07:54:37.783572 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.783580 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:37.783586 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:37.783652 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:37.824269 1078428 cri.go:89] found id: ""
	I1210 07:54:37.824302 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.824320 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:37.824326 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:37.824392 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:37.859292 1078428 cri.go:89] found id: ""
	I1210 07:54:37.859315 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.859324 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:37.859332 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:37.859391 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:37.887370 1078428 cri.go:89] found id: ""
	I1210 07:54:37.887395 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.887404 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:37.887411 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:37.887471 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:37.912568 1078428 cri.go:89] found id: ""
	I1210 07:54:37.912590 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.912599 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:37.912605 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:37.912667 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:37.942226 1078428 cri.go:89] found id: ""
	I1210 07:54:37.942294 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.942321 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:37.942341 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:37.942416 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:37.967116 1078428 cri.go:89] found id: ""
	I1210 07:54:37.967186 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.967211 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:37.967234 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:37.967261 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:38.026081 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:38.026123 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:38.044051 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:38.044086 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:38.137383 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:38.129031    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.129785    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.131296    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.131679    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.133181    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:38.137408 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:38.137420 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:38.163137 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:38.163174 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:54:36.553998 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:38.554925 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
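Taken together, this section is one fixed-cadence wait loop: probe for the apiserver roughly every three seconds, dump diagnostics on every miss, and stop only at the test's outer timeout. A generic sketch of that shape; the interval, timeout, and condition are stand-ins, and minikube's actual retry plumbing differs in detail:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // pollUntil retries check at a fixed interval until it succeeds or the
    // timeout elapses, mirroring the ~3s cycles visible in the timestamps.
    func pollUntil(interval, timeout time.Duration, check func() bool) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if check() {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return errors.New("timed out waiting for kube-apiserver")
    }

    func main() {
    	err := pollUntil(3*time.Second, 30*time.Second, func() bool {
    		fmt.Println("probing for kube-apiserver ...") // stands in for the pgrep/crictl probes
    		return false
    	})
    	fmt.Println(err)
    }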
	I1210 07:54:40.692712 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:40.705786 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:40.705862 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:40.730857 1078428 cri.go:89] found id: ""
	I1210 07:54:40.730881 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.730890 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:40.730896 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:40.730956 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:40.759374 1078428 cri.go:89] found id: ""
	I1210 07:54:40.759401 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.759410 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:40.759417 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:40.759481 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:40.784874 1078428 cri.go:89] found id: ""
	I1210 07:54:40.784898 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.784906 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:40.784912 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:40.784972 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:40.829615 1078428 cri.go:89] found id: ""
	I1210 07:54:40.829638 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.829648 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:40.829655 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:40.829714 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:40.855514 1078428 cri.go:89] found id: ""
	I1210 07:54:40.855537 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.855547 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:40.855553 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:40.855622 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:40.880645 1078428 cri.go:89] found id: ""
	I1210 07:54:40.880674 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.880683 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:40.880699 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:40.880762 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:40.908526 1078428 cri.go:89] found id: ""
	I1210 07:54:40.908553 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.908562 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:40.908568 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:40.908627 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:40.933389 1078428 cri.go:89] found id: ""
	I1210 07:54:40.933417 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.933427 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:40.933466 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:40.933485 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:40.989429 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:40.989508 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:41.005657 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:41.005748 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:41.093001 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:41.084101    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.084887    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.086620    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.087167    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.088880    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:41.093075 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:41.093107 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:41.120941 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:41.121022 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:43.650332 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:43.660886 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:43.660957 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:43.685546 1078428 cri.go:89] found id: ""
	I1210 07:54:43.685572 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.685582 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:43.685590 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:43.685652 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:43.710551 1078428 cri.go:89] found id: ""
	I1210 07:54:43.710575 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.710584 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:43.710590 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:43.710651 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:43.735321 1078428 cri.go:89] found id: ""
	I1210 07:54:43.735347 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.735357 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:43.735363 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:43.735422 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:43.760265 1078428 cri.go:89] found id: ""
	I1210 07:54:43.760290 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.760299 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:43.760305 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:43.760371 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:43.785386 1078428 cri.go:89] found id: ""
	I1210 07:54:43.785412 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.785421 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:43.785427 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:43.785491 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:43.812278 1078428 cri.go:89] found id: ""
	I1210 07:54:43.812305 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.812323 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:43.812331 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:43.812390 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:43.844260 1078428 cri.go:89] found id: ""
	I1210 07:54:43.844288 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.844297 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:43.844303 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:43.844374 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:43.878456 1078428 cri.go:89] found id: ""
	I1210 07:54:43.878503 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.878512 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:43.878522 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:43.878533 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:43.934467 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:43.934503 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:43.951761 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:43.951790 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:44.019672 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:44.010215    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.011300    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.013256    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.013896    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.015584    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:44.019739 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:44.019764 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:44.045374 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:44.045448 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:54:41.053999 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:43.054974 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:45.055139 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:46.583553 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:46.594544 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:46.594614 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:46.620989 1078428 cri.go:89] found id: ""
	I1210 07:54:46.621016 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.621026 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:46.621032 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:46.621092 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:46.646885 1078428 cri.go:89] found id: ""
	I1210 07:54:46.646912 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.646921 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:46.646927 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:46.646993 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:46.671522 1078428 cri.go:89] found id: ""
	I1210 07:54:46.671545 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.671555 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:46.671561 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:46.671627 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:46.697035 1078428 cri.go:89] found id: ""
	I1210 07:54:46.697057 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.697066 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:46.697076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:46.697135 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:46.721985 1078428 cri.go:89] found id: ""
	I1210 07:54:46.722008 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.722016 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:46.722023 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:46.722081 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:46.750862 1078428 cri.go:89] found id: ""
	I1210 07:54:46.750885 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.750894 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:46.750900 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:46.750957 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:46.775321 1078428 cri.go:89] found id: ""
	I1210 07:54:46.775347 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.775357 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:46.775363 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:46.775422 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:46.804576 1078428 cri.go:89] found id: ""
	I1210 07:54:46.804603 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.804612 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:46.804624 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:46.804635 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:46.869024 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:46.869059 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:46.887039 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:46.887068 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:46.955257 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:46.946979    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.947599    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.949092    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.949593    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.951087    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:46.955281 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:46.955294 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:46.981722 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:46.981766 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
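Every kubectl attempt dies with "dial tcp [::1]:8443: connect: connection refused", meaning nothing is listening on the apiserver port at all, which matches the empty crictl listings. Two quick checks from inside the node confirm this directly (sketch; 8443 is the control-plane port this profile uses):

	# Anything bound to the apiserver port?
	sudo ss -tlnp | grep -w 8443 || echo "nothing listening on 8443"
	# Standard unauthenticated health endpoint; refused here because the server never started.
	curl -ksS https://localhost:8443/healthz || true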
	W1210 07:54:47.553929 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:49.554809 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:49.512895 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:49.523585 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:49.523660 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:49.553762 1078428 cri.go:89] found id: ""
	I1210 07:54:49.553799 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.553809 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:49.553815 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:49.553883 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:49.584365 1078428 cri.go:89] found id: ""
	I1210 07:54:49.584397 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.584406 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:49.584412 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:49.584473 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:49.609054 1078428 cri.go:89] found id: ""
	I1210 07:54:49.609078 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.609088 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:49.609094 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:49.609153 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:49.633506 1078428 cri.go:89] found id: ""
	I1210 07:54:49.633585 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.633612 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:49.633632 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:49.633727 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:49.660681 1078428 cri.go:89] found id: ""
	I1210 07:54:49.660705 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.660713 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:49.660719 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:49.660779 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:49.684429 1078428 cri.go:89] found id: ""
	I1210 07:54:49.684456 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.684465 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:49.684472 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:49.684559 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:49.708792 1078428 cri.go:89] found id: ""
	I1210 07:54:49.708825 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.708834 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:49.708841 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:49.708907 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:49.733028 1078428 cri.go:89] found id: ""
	I1210 07:54:49.733061 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.733070 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:49.733080 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:49.733093 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:49.788419 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:49.788454 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:49.806199 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:49.806229 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:49.890193 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:49.880173    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.881024    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.882835    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.883725    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.885590    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:49.890216 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:49.890229 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:49.916164 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:49.916201 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:52.445192 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:52.455938 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:52.456011 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:52.483578 1078428 cri.go:89] found id: ""
	I1210 07:54:52.483607 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.483615 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:52.483622 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:52.483681 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:52.508996 1078428 cri.go:89] found id: ""
	I1210 07:54:52.509019 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.509028 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:52.509035 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:52.509100 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:52.534163 1078428 cri.go:89] found id: ""
	I1210 07:54:52.534189 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.534197 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:52.534204 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:52.534262 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:52.559446 1078428 cri.go:89] found id: ""
	I1210 07:54:52.559468 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.559476 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:52.559482 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:52.559538 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:52.585685 1078428 cri.go:89] found id: ""
	I1210 07:54:52.585705 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.585714 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:52.585720 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:52.585781 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:52.610362 1078428 cri.go:89] found id: ""
	I1210 07:54:52.610387 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.610396 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:52.610429 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:52.610553 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:52.639114 1078428 cri.go:89] found id: ""
	I1210 07:54:52.639140 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.639149 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:52.639155 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:52.639239 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:52.669083 1078428 cri.go:89] found id: ""
	I1210 07:54:52.669111 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.669120 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:52.669129 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:52.669141 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:52.684926 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:52.684953 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:52.749001 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:52.740050    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.740864    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.742595    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.743091    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.744746    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:52.749025 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:52.749037 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:52.773227 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:52.773261 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:52.804197 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:52.804276 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:54:52.054720 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:54.555065 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:55.368759 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:55.379351 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:55.379439 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:55.403912 1078428 cri.go:89] found id: ""
	I1210 07:54:55.403937 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.403946 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:55.403953 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:55.404021 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:55.432879 1078428 cri.go:89] found id: ""
	I1210 07:54:55.432902 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.432912 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:55.432918 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:55.432981 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:55.457499 1078428 cri.go:89] found id: ""
	I1210 07:54:55.457528 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.457537 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:55.457546 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:55.457605 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:55.482796 1078428 cri.go:89] found id: ""
	I1210 07:54:55.482824 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.482833 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:55.482840 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:55.482900 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:55.508135 1078428 cri.go:89] found id: ""
	I1210 07:54:55.508158 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.508167 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:55.508173 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:55.508239 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:55.532757 1078428 cri.go:89] found id: ""
	I1210 07:54:55.532828 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.532849 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:55.532856 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:55.532923 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:55.558383 1078428 cri.go:89] found id: ""
	I1210 07:54:55.558408 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.558431 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:55.558437 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:55.558540 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:55.584737 1078428 cri.go:89] found id: ""
	I1210 07:54:55.584768 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.584780 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:55.584790 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:55.584802 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:55.611899 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:55.611929 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:55.667940 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:55.667974 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:55.683872 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:55.683902 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:55.753488 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:55.745404    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.746023    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.747737    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.748225    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.749705    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:55.753511 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:55.753523 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
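With no containers at all, the units being journaled above are the next things to verify: the runtime must be up, and the kubelet must have static-pod manifests from which to launch the apiserver. A direct check (sketch; unit names and paths as on the standard minikube/kubeadm image):

	# Runtime and kubelet alive? Static-pod manifests present?
	systemctl is-active containerd kubelet
	ls /etc/kubernetes/manifests/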
	I1210 07:54:58.279433 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:58.290275 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:58.290358 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:58.315732 1078428 cri.go:89] found id: ""
	I1210 07:54:58.315760 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.315769 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:58.315775 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:58.315840 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:58.354970 1078428 cri.go:89] found id: ""
	I1210 07:54:58.354993 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.355002 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:58.355009 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:58.355080 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:58.387261 1078428 cri.go:89] found id: ""
	I1210 07:54:58.387290 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.387300 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:58.387307 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:58.387366 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:58.415659 1078428 cri.go:89] found id: ""
	I1210 07:54:58.415683 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.415691 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:58.415698 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:58.415762 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:58.440257 1078428 cri.go:89] found id: ""
	I1210 07:54:58.440283 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.440292 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:58.440298 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:58.440380 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:58.465572 1078428 cri.go:89] found id: ""
	I1210 07:54:58.465598 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.465607 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:58.465614 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:58.465672 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:58.490288 1078428 cri.go:89] found id: ""
	I1210 07:54:58.490313 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.490321 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:58.490327 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:58.490384 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:58.516549 1078428 cri.go:89] found id: ""
	I1210 07:54:58.516572 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.516580 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:58.516590 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:58.516601 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:58.542195 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:58.542234 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:58.570592 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:58.570623 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:58.627983 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:58.628020 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:58.644192 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:58.644218 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:58.708892 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:58.700163    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.700595    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.702324    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.703126    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.704702    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 07:54:57.053952 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:59.054069 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:01.209184 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:01.221080 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:01.221155 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:01.250125 1078428 cri.go:89] found id: ""
	I1210 07:55:01.250154 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.250163 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:01.250178 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:01.250240 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:01.276827 1078428 cri.go:89] found id: ""
	I1210 07:55:01.276854 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.276869 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:01.276876 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:01.276938 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:01.311772 1078428 cri.go:89] found id: ""
	I1210 07:55:01.311808 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.311818 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:01.311824 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:01.311894 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:01.344006 1078428 cri.go:89] found id: ""
	I1210 07:55:01.344042 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.344052 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:01.344059 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:01.344131 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:01.370453 1078428 cri.go:89] found id: ""
	I1210 07:55:01.370508 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.370517 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:01.370524 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:01.370596 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:01.396784 1078428 cri.go:89] found id: ""
	I1210 07:55:01.396811 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.396833 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:01.396840 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:01.396925 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:01.427026 1078428 cri.go:89] found id: ""
	I1210 07:55:01.427053 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.427064 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:01.427076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:01.427145 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:01.453716 1078428 cri.go:89] found id: ""
	I1210 07:55:01.453745 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.453755 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:01.453765 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:01.453787 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:01.483021 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:01.483048 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:01.538363 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:01.538402 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:01.555879 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:01.555912 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:01.624093 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:01.614677    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.615511    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.617288    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.617899    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.619694    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:01.624120 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:01.624136 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
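The containerd journal pulled above (last 400 lines) is where a failed kube-apiserver sandbox would surface. Filtering it is faster than reading it whole; a rough filter:

	sudo journalctl -u containerd -n 400 --no-pager | grep -iE 'error|fail|sandbox'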
	I1210 07:55:04.151461 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:04.161982 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:04.162052 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:04.187914 1078428 cri.go:89] found id: ""
	I1210 07:55:04.187940 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.187955 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:04.187961 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:04.188020 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:04.212016 1078428 cri.go:89] found id: ""
	I1210 07:55:04.212039 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.212048 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:04.212054 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:04.212113 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:04.237062 1078428 cri.go:89] found id: ""
	I1210 07:55:04.237088 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.237098 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:04.237107 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:04.237166 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:04.262844 1078428 cri.go:89] found id: ""
	I1210 07:55:04.262867 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.262876 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:04.262883 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:04.262943 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:04.288099 1078428 cri.go:89] found id: ""
	I1210 07:55:04.288125 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.288134 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:04.288140 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:04.288198 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:04.315819 1078428 cri.go:89] found id: ""
	I1210 07:55:04.315846 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.315855 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:04.315861 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:04.315923 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:04.349897 1078428 cri.go:89] found id: ""
	I1210 07:55:04.349919 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.349928 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:04.349934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:04.349992 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:04.374228 1078428 cri.go:89] found id: ""
	I1210 07:55:04.374255 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.374264 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:04.374274 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:04.374285 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:04.430541 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:04.430576 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:04.446913 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:04.446947 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:01.054690 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:03.054791 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:04.519646 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:04.510952    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.511715    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.513430    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.514116    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.515790    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:04.519667 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:04.519679 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:04.545056 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:04.545097 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
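Each pass keeps returning the same empty listings, so the control plane is stuck rather than merely slow. The most direct remaining evidence is the pod-level CRI view plus the kubelet's own recent complaints, which show whether the static pods are being created and rejected or never attempted at all (sketch):

	# Pod sandboxes, including failed ones, and the kubelet's latest errors.
	sudo crictl pods
	sudo journalctl -u kubelet -n 50 --no-pager | grep -iE 'apiserver|static|fail'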
	I1210 07:55:07.074592 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:07.085572 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:07.085640 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:07.111394 1078428 cri.go:89] found id: ""
	I1210 07:55:07.111418 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.111426 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:07.111432 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:07.111497 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:07.135823 1078428 cri.go:89] found id: ""
	I1210 07:55:07.135848 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.135857 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:07.135864 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:07.135923 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:07.164275 1078428 cri.go:89] found id: ""
	I1210 07:55:07.164297 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.164306 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:07.164311 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:07.164385 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:07.193334 1078428 cri.go:89] found id: ""
	I1210 07:55:07.193358 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.193367 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:07.193373 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:07.193429 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:07.217929 1078428 cri.go:89] found id: ""
	I1210 07:55:07.217955 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.217964 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:07.217970 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:07.218032 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:07.243152 1078428 cri.go:89] found id: ""
	I1210 07:55:07.243176 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.243185 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:07.243191 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:07.243251 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:07.270888 1078428 cri.go:89] found id: ""
	I1210 07:55:07.270918 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.270927 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:07.270934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:07.270992 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:07.304504 1078428 cri.go:89] found id: ""
	I1210 07:55:07.304531 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.304540 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:07.304549 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:07.304561 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:07.370744 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:07.370786 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:07.386532 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:07.386606 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:07.450870 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:07.442507    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.443254    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.444829    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.445138    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.446858    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:07.442507    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.443254    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.444829    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.445138    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.446858    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:07.450892 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:07.450906 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:07.476441 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:07.476476 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
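The cycle above (pgrep for kube-apiserver, then a per-component crictl scan, then log gathering) repeats every few seconds for the rest of this test. For reference, a minimal bash sketch of the same component scan, assuming crictl is on the node's PATH; the component names and flags are taken verbatim from the cri.go entries above:

  # Scan for each control-plane component the way the log does; an empty
  # result reproduces the "No container was found matching" warnings above.
  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
              kube-controller-manager kindnet kubernetes-dashboard; do
    ids=$(sudo crictl ps -a --quiet --name="${name}")
    if [ -z "${ids}" ]; then
      echo "No container was found matching \"${name}\""
    else
      echo "${name}: ${ids}"
    fi
  done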
	W1210 07:55:05.554590 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:08.054043 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:10.006374 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:10.031408 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:10.031500 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:10.072527 1078428 cri.go:89] found id: ""
	I1210 07:55:10.072558 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.072568 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:10.072575 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:10.072637 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:10.107560 1078428 cri.go:89] found id: ""
	I1210 07:55:10.107605 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.107615 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:10.107621 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:10.107694 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:10.138416 1078428 cri.go:89] found id: ""
	I1210 07:55:10.138441 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.138450 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:10.138456 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:10.138547 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:10.163271 1078428 cri.go:89] found id: ""
	I1210 07:55:10.163294 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.163303 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:10.163309 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:10.163372 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:10.193549 1078428 cri.go:89] found id: ""
	I1210 07:55:10.193625 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.193637 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:10.193664 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:10.193766 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:10.225083 1078428 cri.go:89] found id: ""
	I1210 07:55:10.225169 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.225182 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:10.225212 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:10.225307 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:10.251042 1078428 cri.go:89] found id: ""
	I1210 07:55:10.251067 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.251082 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:10.251089 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:10.251175 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:10.275656 1078428 cri.go:89] found id: ""
	I1210 07:55:10.275681 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.275690 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:10.275699 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:10.275711 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:10.335591 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:10.335628 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:10.352546 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:10.352577 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:10.421057 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:10.412822    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.413267    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.414660    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.415223    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.416854    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:10.412822    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.413267    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.414660    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.415223    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.416854    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:10.421081 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:10.421094 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:10.446445 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:10.446578 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:12.978285 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:12.988877 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:12.988951 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:13.014715 1078428 cri.go:89] found id: ""
	I1210 07:55:13.014738 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.014746 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:13.014753 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:13.014812 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:13.039187 1078428 cri.go:89] found id: ""
	I1210 07:55:13.039217 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.039226 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:13.039231 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:13.039293 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:13.079663 1078428 cri.go:89] found id: ""
	I1210 07:55:13.079687 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.079696 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:13.079702 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:13.079762 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:13.116097 1078428 cri.go:89] found id: ""
	I1210 07:55:13.116118 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.116127 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:13.116133 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:13.116190 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:13.141856 1078428 cri.go:89] found id: ""
	I1210 07:55:13.141921 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.141946 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:13.141973 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:13.142049 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:13.166245 1078428 cri.go:89] found id: ""
	I1210 07:55:13.166318 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.166341 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:13.166361 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:13.166452 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:13.190766 1078428 cri.go:89] found id: ""
	I1210 07:55:13.190790 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.190799 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:13.190805 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:13.190864 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:13.218179 1078428 cri.go:89] found id: ""
	I1210 07:55:13.218217 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.218227 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:13.218253 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:13.218270 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:13.234044 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:13.234082 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:13.303134 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:13.286450    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.287225    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.289050    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.289622    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.291338    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:13.286450    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.287225    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.289050    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.289622    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.291338    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:13.303158 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:13.303170 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:13.330980 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:13.331017 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:13.358836 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:13.358865 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:55:10.554264 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:13.054017 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:15.055138 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:15.922613 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:15.933295 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:15.933370 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:15.958341 1078428 cri.go:89] found id: ""
	I1210 07:55:15.958364 1078428 logs.go:282] 0 containers: []
	W1210 07:55:15.958373 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:15.958378 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:15.958434 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:15.983285 1078428 cri.go:89] found id: ""
	I1210 07:55:15.983309 1078428 logs.go:282] 0 containers: []
	W1210 07:55:15.983324 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:15.983330 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:15.983387 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:16.008789 1078428 cri.go:89] found id: ""
	I1210 07:55:16.008816 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.008825 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:16.008831 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:16.008926 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:16.035859 1078428 cri.go:89] found id: ""
	I1210 07:55:16.035931 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.035946 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:16.035955 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:16.036022 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:16.068655 1078428 cri.go:89] found id: ""
	I1210 07:55:16.068688 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.068697 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:16.068704 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:16.068776 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:16.106754 1078428 cri.go:89] found id: ""
	I1210 07:55:16.106780 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.106790 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:16.106796 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:16.106862 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:16.133097 1078428 cri.go:89] found id: ""
	I1210 07:55:16.133124 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.133133 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:16.133139 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:16.133207 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:16.157892 1078428 cri.go:89] found id: ""
	I1210 07:55:16.157938 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.157947 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:16.157957 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:16.157970 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:16.212808 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:16.212848 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:16.228781 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:16.228813 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:16.291789 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:16.283368    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.283909    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.285648    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.286193    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.287783    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:16.283368    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.283909    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.285648    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.286193    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.287783    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:16.291811 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:16.291823 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:16.319342 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:16.319380 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
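Each cycle ends with the same "Gathering logs for ..." steps. A compact sketch of those commands as they would run on the node, assuming shell access; paths and flags are copied verbatim from the Run: lines above:

  sudo journalctl -u kubelet -n 400
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  # Exits with status 1 while the apiserver is down, as recorded above.
  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
    --kubeconfig=/var/lib/minikube/kubeconfig
  sudo journalctl -u containerd -n 400
  sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a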
	I1210 07:55:18.855190 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:18.865732 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:18.865807 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:18.889830 1078428 cri.go:89] found id: ""
	I1210 07:55:18.889855 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.889864 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:18.889871 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:18.889936 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:18.914345 1078428 cri.go:89] found id: ""
	I1210 07:55:18.914370 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.914379 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:18.914385 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:18.914444 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:18.939221 1078428 cri.go:89] found id: ""
	I1210 07:55:18.939243 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.939253 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:18.939258 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:18.939316 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:18.967766 1078428 cri.go:89] found id: ""
	I1210 07:55:18.967788 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.967796 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:18.967803 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:18.967867 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:18.996962 1078428 cri.go:89] found id: ""
	I1210 07:55:18.996984 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.996992 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:18.996999 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:18.997055 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:19.023004 1078428 cri.go:89] found id: ""
	I1210 07:55:19.023031 1078428 logs.go:282] 0 containers: []
	W1210 07:55:19.023043 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:19.023052 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:19.023115 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:19.057510 1078428 cri.go:89] found id: ""
	I1210 07:55:19.057540 1078428 logs.go:282] 0 containers: []
	W1210 07:55:19.057549 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:19.057555 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:19.057611 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:19.092862 1078428 cri.go:89] found id: ""
	I1210 07:55:19.092891 1078428 logs.go:282] 0 containers: []
	W1210 07:55:19.092900 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:19.092910 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:19.092921 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:19.150597 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:19.150632 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:19.166174 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:19.166252 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:19.232235 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:19.224144    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.224636    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.226223    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.226815    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.228275    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:19.224144    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.224636    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.226223    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.226815    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.228275    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:19.232259 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:19.232272 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:19.256392 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:19.256424 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:55:17.554658 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:20.054087 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:21.783358 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:21.793821 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:21.793896 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:21.818542 1078428 cri.go:89] found id: ""
	I1210 07:55:21.818564 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.818573 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:21.818580 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:21.818639 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:21.842392 1078428 cri.go:89] found id: ""
	I1210 07:55:21.842414 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.842423 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:21.842429 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:21.842509 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:21.869909 1078428 cri.go:89] found id: ""
	I1210 07:55:21.869931 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.869940 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:21.869947 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:21.870009 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:21.896175 1078428 cri.go:89] found id: ""
	I1210 07:55:21.896197 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.896206 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:21.896212 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:21.896272 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:21.924596 1078428 cri.go:89] found id: ""
	I1210 07:55:21.924672 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.924684 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:21.924691 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:21.924781 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:21.952789 1078428 cri.go:89] found id: ""
	I1210 07:55:21.952811 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.952820 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:21.952826 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:21.952885 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:21.978579 1078428 cri.go:89] found id: ""
	I1210 07:55:21.978603 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.978611 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:21.978617 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:21.978678 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:22.002801 1078428 cri.go:89] found id: ""
	I1210 07:55:22.002829 1078428 logs.go:282] 0 containers: []
	W1210 07:55:22.002838 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:22.002848 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:22.002866 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:22.021034 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:22.021067 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:22.101183 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:22.089820    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.090755    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.092439    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.092768    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.094252    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:22.089820    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.090755    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.092439    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.092768    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.094252    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:22.101208 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:22.101223 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:22.133557 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:22.133593 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:22.160692 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:22.160719 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:55:22.554004 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:25.054003 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:24.716616 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:24.727463 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:24.727545 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:24.752976 1078428 cri.go:89] found id: ""
	I1210 07:55:24.753005 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.753014 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:24.753021 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:24.753081 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:24.780812 1078428 cri.go:89] found id: ""
	I1210 07:55:24.780841 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.780850 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:24.780856 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:24.780913 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:24.806877 1078428 cri.go:89] found id: ""
	I1210 07:55:24.806900 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.806909 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:24.806915 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:24.806979 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:24.836752 1078428 cri.go:89] found id: ""
	I1210 07:55:24.836785 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.836795 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:24.836809 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:24.836876 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:24.863110 1078428 cri.go:89] found id: ""
	I1210 07:55:24.863134 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.863143 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:24.863153 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:24.863219 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:24.888190 1078428 cri.go:89] found id: ""
	I1210 07:55:24.888214 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.888223 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:24.888230 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:24.888289 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:24.912349 1078428 cri.go:89] found id: ""
	I1210 07:55:24.912383 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.912394 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:24.912400 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:24.912462 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:24.937752 1078428 cri.go:89] found id: ""
	I1210 07:55:24.937781 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.937790 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:24.937799 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:24.937811 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:24.992892 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:24.992928 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:25.010173 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:25.010241 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:25.099629 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:25.089564    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.090718    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.091639    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.093411    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.093968    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:25.089564    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.090718    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.091639    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.093411    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.093968    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:25.099713 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:25.099746 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:25.131383 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:25.131423 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:27.663351 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:27.674757 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:27.674843 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:27.704367 1078428 cri.go:89] found id: ""
	I1210 07:55:27.704400 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.704409 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:27.704420 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:27.704484 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:27.731740 1078428 cri.go:89] found id: ""
	I1210 07:55:27.731773 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.731783 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:27.731790 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:27.731852 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:27.761848 1078428 cri.go:89] found id: ""
	I1210 07:55:27.761871 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.761880 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:27.761886 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:27.761952 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:27.789498 1078428 cri.go:89] found id: ""
	I1210 07:55:27.789527 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.789537 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:27.789543 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:27.789603 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:27.815293 1078428 cri.go:89] found id: ""
	I1210 07:55:27.815320 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.815335 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:27.815342 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:27.815401 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:27.840211 1078428 cri.go:89] found id: ""
	I1210 07:55:27.840238 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.840249 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:27.840256 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:27.840320 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:27.866289 1078428 cri.go:89] found id: ""
	I1210 07:55:27.866313 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.866323 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:27.866329 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:27.866388 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:27.892533 1078428 cri.go:89] found id: ""
	I1210 07:55:27.892560 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.892569 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:27.892578 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:27.892590 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:27.952019 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:27.952063 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:27.969597 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:27.969631 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:28.035775 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:28.025972    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.026696    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.028884    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.029384    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.031424    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
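	Every "describe nodes" attempt in this section fails the same way: kubectl on the node cannot reach the apiserver at localhost:8443, which is consistent with the empty crictl listings above (no kube-apiserver container is running yet). A hypothetical manual check on the node, not taken from the report, would be:

    curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"   # hypothetical check; -k skips TLS verification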
	I1210 07:55:28.035802 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:28.035816 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:28.064304 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:28.064344 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
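	Each retry round gathers the same fixed set of diagnostics over SSH. The sketch below only regroups the commands that appear verbatim in the Run: lines above; minikube itself issues them one at a time through its ssh_runner:

    # One log-gathering round; commands copied from the Run: lines above.
    sudo crictl ps -a --quiet --name=kube-apiserver        # repeated for etcd, coredns, kube-scheduler,
                                                           # kube-proxy, kube-controller-manager, kindnet,
                                                           # and kubernetes-dashboard
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig            # fails while the apiserver is down
    sudo journalctl -u containerd -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a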
	W1210 07:55:27.054076 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:29.054524 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:30.599553 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:30.609953 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:30.610023 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:30.634355 1078428 cri.go:89] found id: ""
	I1210 07:55:30.634384 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.634393 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:30.634400 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:30.634460 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:30.658396 1078428 cri.go:89] found id: ""
	I1210 07:55:30.658435 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.658444 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:30.658450 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:30.658540 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:30.683976 1078428 cri.go:89] found id: ""
	I1210 07:55:30.684014 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.684023 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:30.684030 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:30.684099 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:30.708278 1078428 cri.go:89] found id: ""
	I1210 07:55:30.708302 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.708311 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:30.708317 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:30.708376 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:30.733222 1078428 cri.go:89] found id: ""
	I1210 07:55:30.733253 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.733262 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:30.733269 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:30.733368 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:30.758588 1078428 cri.go:89] found id: ""
	I1210 07:55:30.758614 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.758623 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:30.758630 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:30.758700 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:30.783735 1078428 cri.go:89] found id: ""
	I1210 07:55:30.783802 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.783826 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:30.783841 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:30.783910 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:30.807833 1078428 cri.go:89] found id: ""
	I1210 07:55:30.807859 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.807867 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:30.807876 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:30.807888 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:30.872941 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:30.864693    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.865280    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.867101    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.867488    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.869102    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:30.872961 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:30.872975 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:30.899140 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:30.899181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:30.926302 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:30.926333 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:30.982513 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:30.982550 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:33.499017 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:33.509596 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:33.509669 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:33.540057 1078428 cri.go:89] found id: ""
	I1210 07:55:33.540082 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.540090 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:33.540097 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:33.540160 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:33.570955 1078428 cri.go:89] found id: ""
	I1210 07:55:33.570982 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.570991 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:33.570997 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:33.571056 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:33.605930 1078428 cri.go:89] found id: ""
	I1210 07:55:33.605958 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.605968 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:33.605974 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:33.606036 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:33.634909 1078428 cri.go:89] found id: ""
	I1210 07:55:33.634932 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.634941 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:33.634947 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:33.635008 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:33.659844 1078428 cri.go:89] found id: ""
	I1210 07:55:33.659912 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.659927 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:33.659935 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:33.659999 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:33.684878 1078428 cri.go:89] found id: ""
	I1210 07:55:33.684902 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.684911 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:33.684918 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:33.684983 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:33.709473 1078428 cri.go:89] found id: ""
	I1210 07:55:33.709496 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.709505 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:33.709517 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:33.709580 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:33.736059 1078428 cri.go:89] found id: ""
	I1210 07:55:33.736086 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.736095 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:33.736105 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:33.736117 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:33.795512 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:33.795546 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:33.811254 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:33.811282 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:33.878126 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:33.869884    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.870553    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.872067    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.872608    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.874097    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:33.878148 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:33.878163 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:33.904005 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:33.904041 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:55:31.054696 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:33.054864 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:36.431681 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:36.442446 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:36.442546 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:36.466520 1078428 cri.go:89] found id: ""
	I1210 07:55:36.466544 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.466553 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:36.466559 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:36.466616 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:36.497280 1078428 cri.go:89] found id: ""
	I1210 07:55:36.497307 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.497316 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:36.497322 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:36.497382 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:36.526966 1078428 cri.go:89] found id: ""
	I1210 07:55:36.526988 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.526998 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:36.527003 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:36.527067 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:36.566317 1078428 cri.go:89] found id: ""
	I1210 07:55:36.566342 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.566351 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:36.566357 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:36.566432 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:36.598673 1078428 cri.go:89] found id: ""
	I1210 07:55:36.598699 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.598716 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:36.598722 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:36.598795 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:36.638514 1078428 cri.go:89] found id: ""
	I1210 07:55:36.638537 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.638545 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:36.638551 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:36.638621 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:36.663534 1078428 cri.go:89] found id: ""
	I1210 07:55:36.663603 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.663623 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:36.663630 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:36.663715 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:36.692427 1078428 cri.go:89] found id: ""
	I1210 07:55:36.692451 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.692461 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:36.692471 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:36.692482 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:36.717965 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:36.718003 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:36.749638 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:36.749668 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:36.806519 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:36.806562 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:36.823288 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:36.823315 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:36.888077 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:36.879018    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.879649    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.881436    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.881956    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.883715    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:39.389725 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:39.400775 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:39.400867 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:39.426362 1078428 cri.go:89] found id: ""
	I1210 07:55:39.426389 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.426398 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:39.426407 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:39.426555 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:39.455943 1078428 cri.go:89] found id: ""
	I1210 07:55:39.455969 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.455978 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:39.455984 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:39.456043 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:39.484097 1078428 cri.go:89] found id: ""
	I1210 07:55:39.484127 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.484142 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:39.484150 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:39.484209 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	W1210 07:55:35.554545 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:37.554652 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:40.054927 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:39.510381 1078428 cri.go:89] found id: ""
	I1210 07:55:39.510408 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.510417 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:39.510423 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:39.510508 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:39.534754 1078428 cri.go:89] found id: ""
	I1210 07:55:39.534819 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.534838 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:39.534845 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:39.534903 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:39.577369 1078428 cri.go:89] found id: ""
	I1210 07:55:39.577400 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.577409 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:39.577416 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:39.577519 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:39.607302 1078428 cri.go:89] found id: ""
	I1210 07:55:39.607329 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.607348 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:39.607355 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:39.607429 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:39.637231 1078428 cri.go:89] found id: ""
	I1210 07:55:39.637270 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.637282 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:39.637292 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:39.637305 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:39.694701 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:39.694745 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:39.711729 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:39.711761 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:39.777959 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:39.769450    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.770116    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.771675    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.771995    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.773551    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:39.777980 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:39.777995 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:39.802829 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:39.802869 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:42.336278 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:42.348869 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:42.348958 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:42.376684 1078428 cri.go:89] found id: ""
	I1210 07:55:42.376751 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.376766 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:42.376774 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:42.376834 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:42.401855 1078428 cri.go:89] found id: ""
	I1210 07:55:42.401881 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.401890 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:42.401897 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:42.401956 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:42.429508 1078428 cri.go:89] found id: ""
	I1210 07:55:42.429532 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.429541 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:42.429547 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:42.429605 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:42.453954 1078428 cri.go:89] found id: ""
	I1210 07:55:42.453978 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.453988 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:42.453994 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:42.454052 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:42.480307 1078428 cri.go:89] found id: ""
	I1210 07:55:42.480372 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.480386 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:42.480393 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:42.480465 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:42.505157 1078428 cri.go:89] found id: ""
	I1210 07:55:42.505189 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.505198 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:42.505205 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:42.505272 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:42.530482 1078428 cri.go:89] found id: ""
	I1210 07:55:42.530505 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.530513 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:42.530520 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:42.530580 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:42.563929 1078428 cri.go:89] found id: ""
	I1210 07:55:42.563996 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.564019 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:42.564041 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:42.564081 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:42.627607 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:42.627645 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:42.644032 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:42.644059 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:42.709684 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:42.701113    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.701752    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.703455    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.704045    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.705675    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:42.709704 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:42.709717 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:42.735150 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:42.735190 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:55:42.554153 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:44.554944 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:45.263314 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:45.276890 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:45.276965 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:45.320051 1078428 cri.go:89] found id: ""
	I1210 07:55:45.320079 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.320089 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:45.320096 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:45.320155 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:45.357108 1078428 cri.go:89] found id: ""
	I1210 07:55:45.357143 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.357153 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:45.357159 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:45.357235 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:45.386251 1078428 cri.go:89] found id: ""
	I1210 07:55:45.386281 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.386290 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:45.386296 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:45.386355 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:45.411934 1078428 cri.go:89] found id: ""
	I1210 07:55:45.411960 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.411969 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:45.411975 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:45.412034 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:45.438194 1078428 cri.go:89] found id: ""
	I1210 07:55:45.438221 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.438236 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:45.438242 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:45.438299 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:45.462840 1078428 cri.go:89] found id: ""
	I1210 07:55:45.462864 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.462874 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:45.462880 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:45.462938 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:45.487271 1078428 cri.go:89] found id: ""
	I1210 07:55:45.487296 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.487304 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:45.487311 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:45.487368 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:45.512829 1078428 cri.go:89] found id: ""
	I1210 07:55:45.512859 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.512868 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:45.512877 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:45.512888 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:45.592088 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:45.582808    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.583533    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.585213    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.585778    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.587458    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:45.592106 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:45.592119 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:45.625233 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:45.625268 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:45.653443 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:45.653475 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:45.708240 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:45.708280 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
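	The timestamps (07:55:27, :30, :33, :36, :39, :42, :45, :48, :51) show the outer loop re-checking for a running apiserver roughly every three seconds before gathering another round of logs. A minimal bash sketch of that cadence, assuming a fixed 3 s sleep and a hypothetical gather_diagnostics helper (the real interval and exit condition are internal to minikube and not shown in this report):

    # Hypothetical sketch; the pgrep pattern is copied from the Run: lines.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        gather_diagnostics   # stands in for the commands listed earlier
        sleep 3
    done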
	I1210 07:55:48.225757 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:48.236296 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:48.236369 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:48.261289 1078428 cri.go:89] found id: ""
	I1210 07:55:48.261312 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.261320 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:48.261337 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:48.261400 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:48.286722 1078428 cri.go:89] found id: ""
	I1210 07:55:48.286746 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.286755 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:48.286761 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:48.286819 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:48.322426 1078428 cri.go:89] found id: ""
	I1210 07:55:48.322453 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.322484 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:48.322507 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:48.322588 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:48.351023 1078428 cri.go:89] found id: ""
	I1210 07:55:48.351052 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.351062 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:48.351068 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:48.351126 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:48.378519 1078428 cri.go:89] found id: ""
	I1210 07:55:48.378542 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.378550 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:48.378556 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:48.378616 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:48.403355 1078428 cri.go:89] found id: ""
	I1210 07:55:48.403382 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.403392 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:48.403398 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:48.403478 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:48.427960 1078428 cri.go:89] found id: ""
	I1210 07:55:48.427986 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.427995 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:48.428001 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:48.428059 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:48.451603 1078428 cri.go:89] found id: ""
	I1210 07:55:48.451670 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.451696 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:48.451714 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:48.451727 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:48.506052 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:48.506088 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:48.523423 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:48.523453 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:48.594581 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:48.586197    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.587063    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.588701    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.589035    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.590570    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:48.586197    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.587063    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.588701    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.589035    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.590570    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:48.594606 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:48.594619 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:48.622945 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:48.622982 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:55:47.054022 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:49.054783 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
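The interleaved 1077343 lines come from a second profile (no-preload-587009) polling its node's Ready condition and retrying on "connection refused". A minimal sketch of such a wait loop with client-go follows; the function name waitNodeReady and the 2-second interval are illustrative assumptions, not minikube's actual node_ready.go code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the named node reports
// condition Ready=True, retrying on transient errors such as the
// "connection refused" dials seen in the log above. (Sketch only.)
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		} else {
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second): // assumed interval
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "no-preload-587009"); err != nil {
		panic(err)
	}
}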
	I1210 07:55:51.154448 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:51.165850 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:51.165926 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:51.191582 1078428 cri.go:89] found id: ""
	I1210 07:55:51.191607 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.191615 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:51.191622 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:51.191681 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:51.216289 1078428 cri.go:89] found id: ""
	I1210 07:55:51.216314 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.216324 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:51.216331 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:51.216390 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:51.245299 1078428 cri.go:89] found id: ""
	I1210 07:55:51.245324 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.245333 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:51.245339 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:51.245400 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:51.269348 1078428 cri.go:89] found id: ""
	I1210 07:55:51.269372 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.269380 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:51.269387 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:51.269443 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:51.296327 1078428 cri.go:89] found id: ""
	I1210 07:55:51.296350 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.296360 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:51.296367 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:51.296433 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:51.326976 1078428 cri.go:89] found id: ""
	I1210 07:55:51.326997 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.327005 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:51.327011 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:51.327069 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:51.360781 1078428 cri.go:89] found id: ""
	I1210 07:55:51.360857 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.360873 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:51.360881 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:51.360960 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:51.384754 1078428 cri.go:89] found id: ""
	I1210 07:55:51.384779 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.384788 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:51.384799 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:51.384810 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:51.443446 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:51.443483 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:51.461527 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:51.461559 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:51.529060 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:51.520063    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.520763    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.522338    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.522821    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.524380    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:51.520063    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.520763    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.522338    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.522821    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.524380    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:51.529096 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:51.529109 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:51.561037 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:51.561354 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
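Each cri.go:54 / ssh_runner.go:195 pair above corresponds to one `sudo crictl ps -a --quiet --name=<component>` invocation, whose empty output produces the `found id: ""` / `0 containers` lines. A minimal local sketch of that listing step (listContainerIDs is a hypothetical helper; minikube actually runs the command over SSH inside the node container):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs crictl and returns the IDs of all containers
// (running or exited) whose name matches the given filter, mirroring
// the "crictl ps -a --quiet --name=..." calls in the log.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed: %w", err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}

An empty ID list for every control-plane component, as in the cycles above, means the runtime has no record of the pods at all, not merely that they are unhealthy.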
	I1210 07:55:54.111711 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:54.122707 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:54.122781 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:54.152821 1078428 cri.go:89] found id: ""
	I1210 07:55:54.152853 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.152867 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:54.152878 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:54.152961 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:54.180559 1078428 cri.go:89] found id: ""
	I1210 07:55:54.180583 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.180591 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:54.180598 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:54.180662 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:54.208251 1078428 cri.go:89] found id: ""
	I1210 07:55:54.208276 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.208285 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:54.208292 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:54.208349 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:54.233630 1078428 cri.go:89] found id: ""
	I1210 07:55:54.233655 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.233664 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:54.233670 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:54.233727 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:54.258409 1078428 cri.go:89] found id: ""
	I1210 07:55:54.258435 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.258443 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:54.258450 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:54.258533 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:54.282200 1078428 cri.go:89] found id: ""
	I1210 07:55:54.282234 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.282242 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:54.282248 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:54.282306 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:54.326329 1078428 cri.go:89] found id: ""
	I1210 07:55:54.326352 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.326361 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:54.326367 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:54.326428 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:54.353371 1078428 cri.go:89] found id: ""
	I1210 07:55:54.353396 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.353405 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:54.353415 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:54.353429 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:54.412987 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:54.413025 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:54.429633 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:54.429718 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:51.553930 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:53.554866 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:54.497491 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:54.488603    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.489246    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.490208    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.491739    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.492335    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:54.488603    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.489246    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.490208    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.491739    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.492335    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:54.497530 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:54.497544 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:54.523210 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:54.523247 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
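The container-status command uses a shell fallback: resolve crictl via `which crictl || echo crictl`, and if the crictl invocation fails, fall back to `sudo docker ps -a`. The same two-step fallback expressed in Go (a sketch, assuming plain local exec of the two commands is acceptable):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus tries crictl first and falls back to docker, the
// same fallback the backtick-ed shell pipeline in the log encodes.
func containerStatus() (string, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		return string(out), nil
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("both crictl and docker failed: %w", err)
	}
	return string(out), nil
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(out)
}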
	I1210 07:55:57.066626 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:57.077561 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:57.077642 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:57.102249 1078428 cri.go:89] found id: ""
	I1210 07:55:57.102273 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.102282 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:57.102289 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:57.102352 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:57.126387 1078428 cri.go:89] found id: ""
	I1210 07:55:57.126413 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.126421 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:57.126427 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:57.126506 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:57.151315 1078428 cri.go:89] found id: ""
	I1210 07:55:57.151341 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.151351 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:57.151357 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:57.151417 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:57.180045 1078428 cri.go:89] found id: ""
	I1210 07:55:57.180074 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.180083 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:57.180090 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:57.180150 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:57.205199 1078428 cri.go:89] found id: ""
	I1210 07:55:57.205225 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.205233 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:57.205240 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:57.205299 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:57.233971 1078428 cri.go:89] found id: ""
	I1210 07:55:57.233999 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.234009 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:57.234015 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:57.234078 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:57.258568 1078428 cri.go:89] found id: ""
	I1210 07:55:57.258594 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.258604 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:57.258610 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:57.258668 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:57.282764 1078428 cri.go:89] found id: ""
	I1210 07:55:57.282790 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.282800 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:57.282810 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:57.282823 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:57.299427 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:57.299453 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:57.374740 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:57.366367   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.367109   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.368634   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.369192   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.370847   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:57.366367   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.367109   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.368634   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.369192   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.370847   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:57.374810 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:57.374851 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:57.400786 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:57.400822 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:57.427735 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:57.427767 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:55:56.054043 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:58.054190 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:00.055015 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
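Both failure modes above are the same symptom: kubectl inside the node dials [::1]:8443 and the test harness dials 192.168.85.2:8443, and each gets ECONNREFUSED because no kube-apiserver is listening on the secure port. A quick probe that distinguishes "nothing listening" from TLS or auth problems (addresses copied from the log; the probe itself is illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

// probe attempts a plain TCP connect to the apiserver's secure port;
// "connection refused" here matches the dial errors in the log and
// means no listener exists, not a certificate or RBAC problem.
func probe(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	for _, addr := range []string{"localhost:8443", "192.168.85.2:8443"} {
		if err := probe(addr); err != nil {
			fmt.Printf("%s: %v\n", addr, err)
		} else {
			fmt.Printf("%s: listening\n", addr)
		}
	}
}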
	I1210 07:55:59.984110 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:59.994599 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:59.994677 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:00.044693 1078428 cri.go:89] found id: ""
	I1210 07:56:00.044863 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.044893 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:00.044928 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:00.045024 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:00.118046 1078428 cri.go:89] found id: ""
	I1210 07:56:00.118124 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.118150 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:00.118171 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:00.119167 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:00.182111 1078428 cri.go:89] found id: ""
	I1210 07:56:00.182136 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.182145 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:00.182152 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:00.182960 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:00.239971 1078428 cri.go:89] found id: ""
	I1210 07:56:00.239996 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.240006 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:00.240013 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:00.240085 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:00.287888 1078428 cri.go:89] found id: ""
	I1210 07:56:00.287927 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.287937 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:00.287945 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:00.288014 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:00.352509 1078428 cri.go:89] found id: ""
	I1210 07:56:00.352556 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.352566 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:00.352593 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:00.352712 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:00.421383 1078428 cri.go:89] found id: ""
	I1210 07:56:00.421421 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.421430 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:00.421437 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:00.421521 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:00.456737 1078428 cri.go:89] found id: ""
	I1210 07:56:00.456766 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.456776 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:00.456786 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:00.456803 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:00.539348 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:00.530832   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.531677   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.533452   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.533939   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.535406   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:00.530832   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.531677   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.533452   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.533939   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.535406   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:00.539370 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:00.539385 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:00.569574 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:00.569616 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:00.613655 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:00.613680 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:00.671124 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:00.671163 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
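The kubelet, containerd, and dmesg sections are gathered with fixed-size tails (`journalctl -u <unit> -n 400`, `dmesg ... | tail -n 400`) so the diagnostic bundle stays bounded. A sketch of the journalctl half (unitLogs is a hypothetical wrapper; minikube pipes these through /bin/bash -c over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// unitLogs fetches the last n journal entries for a systemd unit,
// as the "journalctl -u kubelet -n 400" / "-u containerd -n 400"
// commands in the log do.
func unitLogs(unit string, n int) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).Output()
	return string(out), err
}

func main() {
	for _, u := range []string{"kubelet", "containerd"} {
		logs, err := unitLogs(u, 400)
		if err != nil {
			fmt.Printf("%s: %v\n", u, err)
			continue
		}
		fmt.Printf("== %s ==\n%s", u, logs)
	}
}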
	I1210 07:56:03.187739 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:03.198133 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:03.198208 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:03.223791 1078428 cri.go:89] found id: ""
	I1210 07:56:03.223818 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.223828 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:03.223834 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:03.223894 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:03.248620 1078428 cri.go:89] found id: ""
	I1210 07:56:03.248644 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.248653 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:03.248659 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:03.248720 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:03.273951 1078428 cri.go:89] found id: ""
	I1210 07:56:03.273975 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.273985 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:03.273991 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:03.274053 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:03.300277 1078428 cri.go:89] found id: ""
	I1210 07:56:03.300300 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.300309 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:03.300315 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:03.300372 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:03.332941 1078428 cri.go:89] found id: ""
	I1210 07:56:03.332967 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.332977 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:03.332983 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:03.333038 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:03.367066 1078428 cri.go:89] found id: ""
	I1210 07:56:03.367091 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.367100 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:03.367106 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:03.367164 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:03.391075 1078428 cri.go:89] found id: ""
	I1210 07:56:03.391098 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.391106 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:03.391112 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:03.391170 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:03.415021 1078428 cri.go:89] found id: ""
	I1210 07:56:03.415049 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.415058 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:03.415068 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:03.415079 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:03.440424 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:03.440470 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:03.468290 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:03.468319 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:03.525567 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:03.525601 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:03.541470 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:03.541505 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:03.626098 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:03.618196   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.618603   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.620212   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.620606   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.622172   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:03.618196   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.618603   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.620212   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.620606   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.622172   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1210 07:56:02.554809 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:05.054059 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
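Every cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`: pgrep matches against the full command line (-f), exact pattern (-x), newest process (-n), and exits non-zero when nothing matches, so the code falls through to the per-component crictl listings. The probe reduces to:

package main

import (
	"fmt"
	"os/exec"
)

// apiserverRunning mirrors the "sudo pgrep -xnf kube-apiserver.*minikube.*"
// probe in the log: pgrep's non-zero exit status when no process matches
// is why every cycle proceeds to the container listings.
func apiserverRunning() bool {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil
}

func main() {
	fmt.Println("kube-apiserver running:", apiserverRunning())
}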
	I1210 07:56:06.126647 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:06.137759 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:06.137831 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:06.163154 1078428 cri.go:89] found id: ""
	I1210 07:56:06.163181 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.163191 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:06.163198 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:06.163265 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:06.192495 1078428 cri.go:89] found id: ""
	I1210 07:56:06.192521 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.192530 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:06.192536 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:06.192615 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:06.220976 1078428 cri.go:89] found id: ""
	I1210 07:56:06.221009 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.221017 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:06.221025 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:06.221134 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:06.246400 1078428 cri.go:89] found id: ""
	I1210 07:56:06.246427 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.246436 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:06.246442 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:06.246523 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:06.272644 1078428 cri.go:89] found id: ""
	I1210 07:56:06.272667 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.272675 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:06.272681 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:06.272738 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:06.300567 1078428 cri.go:89] found id: ""
	I1210 07:56:06.300636 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.300648 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:06.300655 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:06.300726 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:06.332683 1078428 cri.go:89] found id: ""
	I1210 07:56:06.332750 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.332773 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:06.332795 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:06.332881 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:06.366018 1078428 cri.go:89] found id: ""
	I1210 07:56:06.366099 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.366124 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:06.366149 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:06.366177 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:06.422922 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:06.422958 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:06.439199 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:06.439231 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:06.512644 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:06.503564   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.504265   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.506193   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.506871   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.508506   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:06.503564   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.504265   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.506193   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.506871   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.508506   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:06.512669 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:06.512682 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:06.537590 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:06.537625 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
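The cycle timestamps (07:55:48, :51, :54, :57, 07:56:00, ...) show the whole probe-and-gather sequence re-running on roughly a 3-second cadence until the apiserver wait times out. A generic fixed-interval retry wrapper of that shape (an assumed structure inferred from the timestamps, not minikube's actual implementation):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// retryEvery re-runs check on a fixed interval until it succeeds or the
// context expires, checking immediately on entry before the first wait.
func retryEvery(ctx context.Context, interval time.Duration, check func() error) error {
	t := time.NewTicker(interval)
	defer t.Stop()
	for {
		if err := check(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-t.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	err := retryEvery(ctx, 3*time.Second, func() error {
		return errors.New("apiserver not up yet") // stand-in for the pgrep/crictl probe
	})
	fmt.Println("result:", err)
}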
	I1210 07:56:09.085608 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:09.095930 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:09.096006 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:09.119422 1078428 cri.go:89] found id: ""
	I1210 07:56:09.119445 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.119454 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:09.119460 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:09.119518 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:09.145193 1078428 cri.go:89] found id: ""
	I1210 07:56:09.145220 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.145230 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:09.145236 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:09.145296 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:09.170538 1078428 cri.go:89] found id: ""
	I1210 07:56:09.170567 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.170576 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:09.170582 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:09.170640 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:09.199713 1078428 cri.go:89] found id: ""
	I1210 07:56:09.199741 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.199749 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:09.199756 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:09.199815 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:09.224005 1078428 cri.go:89] found id: ""
	I1210 07:56:09.224037 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.224046 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:09.224053 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:09.224112 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:09.254251 1078428 cri.go:89] found id: ""
	I1210 07:56:09.254273 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.254283 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:09.254290 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:09.254348 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:09.280458 1078428 cri.go:89] found id: ""
	I1210 07:56:09.280484 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.280493 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:09.280500 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:09.280565 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:09.320912 1078428 cri.go:89] found id: ""
	I1210 07:56:09.320943 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.320952 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:09.320961 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:09.320974 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:09.386817 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:09.386854 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:09.402878 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:09.402954 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:09.472013 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:09.462698   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.463971   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.464603   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.466051   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.467835   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:09.462698   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.463971   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.464603   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.466051   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.467835   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:09.472092 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:09.472114 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1210 07:56:07.054571 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:09.054701 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:09.497983 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:09.498020 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:12.030207 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:12.040966 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:12.041087 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:12.069314 1078428 cri.go:89] found id: ""
	I1210 07:56:12.069346 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.069356 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:12.069362 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:12.069424 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:12.096321 1078428 cri.go:89] found id: ""
	I1210 07:56:12.096400 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.096423 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:12.096438 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:12.096519 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:12.122859 1078428 cri.go:89] found id: ""
	I1210 07:56:12.122887 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.122896 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:12.122903 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:12.122985 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:12.148481 1078428 cri.go:89] found id: ""
	I1210 07:56:12.148505 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.148514 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:12.148520 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:12.148633 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:12.172954 1078428 cri.go:89] found id: ""
	I1210 07:56:12.172978 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.172995 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:12.173003 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:12.173063 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:12.198414 1078428 cri.go:89] found id: ""
	I1210 07:56:12.198436 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.198446 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:12.198453 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:12.198530 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:12.227549 1078428 cri.go:89] found id: ""
	I1210 07:56:12.227576 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.227586 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:12.227592 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:12.227651 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:12.255277 1078428 cri.go:89] found id: ""
	I1210 07:56:12.255300 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.255309 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
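Each diagnostic round above follows the same two-step probe: first a pgrep for a running kube-apiserver process, then one crictl query per control-plane component, with empty output treated as "no container found". A self-contained sketch of that pattern (an illustration, not the cri.go implementation; it assumes sudo and crictl are available on the host):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Step 1: is a kube-apiserver process running at all? pgrep exits
        // non-zero when it finds no match, which Run() reports as an error.
        if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
            fmt.Println("no kube-apiserver process found")
        }
        // Step 2: ask crictl for each control-plane container by name;
        // --quiet prints only container IDs, so empty output means absent.
        for _, name := range []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        } {
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil || strings.TrimSpace(string(out)) == "" {
                fmt.Printf("no container found matching %q\n", name)
            }
        }
    }

Relying on --quiet is what makes the empty-string check above work: with container IDs as the only output, an empty result is unambiguous, which is exactly the found id: "" pattern in the log.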
	I1210 07:56:12.255318 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:12.255330 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:12.343072 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:12.327709   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.328182   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.329582   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.330282   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.331929   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:12.343095 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:12.343109 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:12.370845 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:12.370884 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:12.401190 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:12.401217 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:12.456146 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:12.456181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
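The gather phase simply shells out through bash -c for each log source (kubelet, containerd, dmesg, container status), including the which-crictl-or-docker fallback visible above. A sketch running the same commands (locally rather than over SSH, which is an assumption; ssh_runner executes them on the node):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The same gather commands the log runs via ssh_runner.
        for _, c := range []string{
            "sudo journalctl -u kubelet -n 400",
            "sudo journalctl -u containerd -n 400",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        } {
            out, _ := exec.Command("/bin/bash", "-c", c).CombinedOutput()
            fmt.Printf("--- %s ---\n%s\n", c, out)
        }
    }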
	W1210 07:56:11.554344 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:13.554843 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:14.972152 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:14.983046 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:14.983121 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:15.031099 1078428 cri.go:89] found id: ""
	I1210 07:56:15.031183 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.031217 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:15.031260 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:15.031373 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:15.061619 1078428 cri.go:89] found id: ""
	I1210 07:56:15.061646 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.061655 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:15.061662 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:15.061728 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:15.088678 1078428 cri.go:89] found id: ""
	I1210 07:56:15.088701 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.088709 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:15.088716 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:15.088781 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:15.118776 1078428 cri.go:89] found id: ""
	I1210 07:56:15.118854 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.118872 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:15.118881 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:15.118945 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:15.144691 1078428 cri.go:89] found id: ""
	I1210 07:56:15.144717 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.144727 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:15.144734 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:15.144799 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:15.169827 1078428 cri.go:89] found id: ""
	I1210 07:56:15.169854 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.169863 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:15.169870 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:15.169927 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:15.196425 1078428 cri.go:89] found id: ""
	I1210 07:56:15.196459 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.196468 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:15.196474 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:15.196533 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:15.221736 1078428 cri.go:89] found id: ""
	I1210 07:56:15.221763 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.221772 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:15.221782 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:15.221794 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:15.237860 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:15.237890 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:15.309823 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:15.299280   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.302014   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.302821   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.303726   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.304513   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:15.309847 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:15.309860 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:15.342939 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:15.342990 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:15.376812 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:15.376839 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:17.934235 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:17.945317 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:17.945396 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:17.971659 1078428 cri.go:89] found id: ""
	I1210 07:56:17.971685 1078428 logs.go:282] 0 containers: []
	W1210 07:56:17.971694 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:17.971700 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:17.971753 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:17.996434 1078428 cri.go:89] found id: ""
	I1210 07:56:17.996476 1078428 logs.go:282] 0 containers: []
	W1210 07:56:17.996488 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:17.996495 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:17.996560 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:18.024303 1078428 cri.go:89] found id: ""
	I1210 07:56:18.024338 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.024347 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:18.024354 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:18.024416 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:18.049317 1078428 cri.go:89] found id: ""
	I1210 07:56:18.049344 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.049353 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:18.049360 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:18.049421 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:18.079586 1078428 cri.go:89] found id: ""
	I1210 07:56:18.079611 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.079620 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:18.079627 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:18.079686 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:18.108486 1078428 cri.go:89] found id: ""
	I1210 07:56:18.108511 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.108519 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:18.108526 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:18.108601 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:18.137645 1078428 cri.go:89] found id: ""
	I1210 07:56:18.137671 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.137680 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:18.137686 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:18.137767 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:18.161838 1078428 cri.go:89] found id: ""
	I1210 07:56:18.161863 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.161874 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:18.161883 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:18.161916 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:18.235505 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:18.227479   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.228109   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.229636   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.230246   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.231753   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:18.235526 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:18.235539 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:18.260551 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:18.260589 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:18.288267 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:18.288296 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:18.349132 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:18.349215 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:56:16.054030 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:18.054084 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
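Meanwhile the interleaved 1077343 process is polling the no-preload-587009 node object and retrying on connection refused, per the node_ready.go:55 warnings above. A hedged sketch of that poll loop (the 2-second interval, 5-attempt cap, and InsecureSkipVerify are assumptions for a self-contained example; the real client authenticates with the cluster's kubeconfig):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        url := "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009"
        client := &http.Client{
            Timeout:   3 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for attempt := 1; attempt <= 5; attempt++ {
            resp, err := client.Get(url)
            if err != nil {
                // Matches the warnings above: connect: connection refused
                fmt.Printf("attempt %d, will retry: %v\n", attempt, err)
                time.Sleep(2 * time.Second)
                continue
            }
            resp.Body.Close()
            fmt.Println("apiserver answered:", resp.Status)
            return
        }
    }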
	I1210 07:56:20.868569 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:20.879574 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:20.879649 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:20.904201 1078428 cri.go:89] found id: ""
	I1210 07:56:20.904226 1078428 logs.go:282] 0 containers: []
	W1210 07:56:20.904235 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:20.904241 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:20.904299 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:20.929396 1078428 cri.go:89] found id: ""
	I1210 07:56:20.929423 1078428 logs.go:282] 0 containers: []
	W1210 07:56:20.929432 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:20.929439 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:20.929514 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:20.954953 1078428 cri.go:89] found id: ""
	I1210 07:56:20.954984 1078428 logs.go:282] 0 containers: []
	W1210 07:56:20.954993 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:20.954999 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:20.955058 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:20.978741 1078428 cri.go:89] found id: ""
	I1210 07:56:20.978767 1078428 logs.go:282] 0 containers: []
	W1210 07:56:20.978776 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:20.978782 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:20.978841 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:21.003286 1078428 cri.go:89] found id: ""
	I1210 07:56:21.003313 1078428 logs.go:282] 0 containers: []
	W1210 07:56:21.003323 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:21.003330 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:21.003402 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:21.034505 1078428 cri.go:89] found id: ""
	I1210 07:56:21.034527 1078428 logs.go:282] 0 containers: []
	W1210 07:56:21.034536 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:21.034543 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:21.034605 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:21.058861 1078428 cri.go:89] found id: ""
	I1210 07:56:21.058885 1078428 logs.go:282] 0 containers: []
	W1210 07:56:21.058894 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:21.058900 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:21.058958 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:21.082740 1078428 cri.go:89] found id: ""
	I1210 07:56:21.082764 1078428 logs.go:282] 0 containers: []
	W1210 07:56:21.082773 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:21.082782 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:21.082794 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:21.098247 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:21.098276 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:21.161962 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:21.153624   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.154239   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.155892   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.156389   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.158100   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:21.161982 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:21.161995 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:21.187272 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:21.187314 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:21.214180 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:21.214213 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:23.769450 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:23.780372 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:23.780505 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:23.817607 1078428 cri.go:89] found id: ""
	I1210 07:56:23.817631 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.817641 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:23.817648 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:23.817709 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:23.848903 1078428 cri.go:89] found id: ""
	I1210 07:56:23.848927 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.848949 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:23.848960 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:23.849023 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:23.877281 1078428 cri.go:89] found id: ""
	I1210 07:56:23.877305 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.877314 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:23.877320 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:23.877387 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:23.903972 1078428 cri.go:89] found id: ""
	I1210 07:56:23.903997 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.904006 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:23.904013 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:23.904089 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:23.929481 1078428 cri.go:89] found id: ""
	I1210 07:56:23.929508 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.929517 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:23.929525 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:23.929586 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:23.954626 1078428 cri.go:89] found id: ""
	I1210 07:56:23.954665 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.954676 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:23.954683 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:23.954785 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:23.980069 1078428 cri.go:89] found id: ""
	I1210 07:56:23.980102 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.980111 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:23.980117 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:23.980176 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:24.005963 1078428 cri.go:89] found id: ""
	I1210 07:56:24.005987 1078428 logs.go:282] 0 containers: []
	W1210 07:56:24.005996 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:24.006006 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:24.006017 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:24.036028 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:24.036065 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:24.065541 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:24.065571 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:24.126584 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:24.126630 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:24.143358 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:24.143391 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:24.208974 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:24.200724   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.201305   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.202949   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.203979   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.204653   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 07:56:20.554242 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:22.554679 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:25.054999 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:26.710619 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:26.721267 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:26.721343 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:26.746073 1078428 cri.go:89] found id: ""
	I1210 07:56:26.746100 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.746109 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:26.746115 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:26.746178 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:26.772432 1078428 cri.go:89] found id: ""
	I1210 07:56:26.772456 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.772472 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:26.772479 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:26.772538 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:26.809928 1078428 cri.go:89] found id: ""
	I1210 07:56:26.809954 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.809964 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:26.809970 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:26.810026 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:26.837500 1078428 cri.go:89] found id: ""
	I1210 07:56:26.837522 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.837531 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:26.837538 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:26.837592 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:26.864667 1078428 cri.go:89] found id: ""
	I1210 07:56:26.864693 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.864702 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:26.864708 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:26.864768 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:26.892330 1078428 cri.go:89] found id: ""
	I1210 07:56:26.892359 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.892368 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:26.892374 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:26.892457 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:26.916781 1078428 cri.go:89] found id: ""
	I1210 07:56:26.916807 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.916815 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:26.916822 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:26.916902 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:26.945103 1078428 cri.go:89] found id: ""
	I1210 07:56:26.945128 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.945137 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:26.945147 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:26.945178 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:27.001893 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:27.001933 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:27.020119 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:27.020149 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:27.092626 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:27.084844   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.085466   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.086971   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.087472   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.088931   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:27.092690 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:27.092712 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:27.118838 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:27.118873 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:56:27.554852 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:29.554968 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:29.646997 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:29.659058 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:29.659139 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:29.684417 1078428 cri.go:89] found id: ""
	I1210 07:56:29.684442 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.684452 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:29.684459 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:29.684532 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:29.713716 1078428 cri.go:89] found id: ""
	I1210 07:56:29.713747 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.713756 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:29.713762 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:29.713829 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:29.742671 1078428 cri.go:89] found id: ""
	I1210 07:56:29.742747 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.742761 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:29.742769 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:29.742834 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:29.767461 1078428 cri.go:89] found id: ""
	I1210 07:56:29.767488 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.767497 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:29.767503 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:29.767590 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:29.791629 1078428 cri.go:89] found id: ""
	I1210 07:56:29.791655 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.791664 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:29.791670 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:29.791728 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:29.822213 1078428 cri.go:89] found id: ""
	I1210 07:56:29.822240 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.822249 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:29.822255 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:29.822317 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:29.854606 1078428 cri.go:89] found id: ""
	I1210 07:56:29.854633 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.854643 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:29.854649 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:29.854709 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:29.880033 1078428 cri.go:89] found id: ""
	I1210 07:56:29.880059 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.880068 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:29.880077 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:29.880090 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:29.948475 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:29.940551   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.941416   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.942953   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.943415   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.944654   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:29.948498 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:29.948512 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:29.974136 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:29.974171 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:30.013967 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:30.014008 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:30.097748 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:30.097788 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:32.617610 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:32.628661 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:32.628735 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:32.652564 1078428 cri.go:89] found id: ""
	I1210 07:56:32.652594 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.652603 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:32.652610 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:32.652668 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:32.680277 1078428 cri.go:89] found id: ""
	I1210 07:56:32.680302 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.680310 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:32.680317 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:32.680379 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:32.704183 1078428 cri.go:89] found id: ""
	I1210 07:56:32.704207 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.704216 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:32.704222 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:32.704285 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:32.729141 1078428 cri.go:89] found id: ""
	I1210 07:56:32.729165 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.729174 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:32.729180 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:32.729237 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:32.753460 1078428 cri.go:89] found id: ""
	I1210 07:56:32.753482 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.753490 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:32.753496 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:32.753562 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:32.781036 1078428 cri.go:89] found id: ""
	I1210 07:56:32.781061 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.781069 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:32.781076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:32.781131 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:32.816565 1078428 cri.go:89] found id: ""
	I1210 07:56:32.816586 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.816594 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:32.816599 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:32.816655 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:32.848807 1078428 cri.go:89] found id: ""
	I1210 07:56:32.848832 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.848841 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:32.848849 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:32.848861 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:32.908343 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:32.908379 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:32.924367 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:32.924396 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:32.994542 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:32.985949   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.986567   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.988414   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.988968   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.990531   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:32.985949   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.986567   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.988414   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.988968   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.990531   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:32.994565 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:32.994581 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:33.024802 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:33.024842 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
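
The block above is one full pass of minikube's apiserver wait loop: pgrep looks for a kube-apiserver process, crictl is queried for each expected control-plane container, and when every query comes back empty the kubelet, dmesg, describe-nodes, containerd, and container-status logs are gathered before the next retry. Below is a minimal sketch of running the same probe by hand; <profile> is a placeholder, since the pid-1078428 process never names its profile in this excerpt (the interleaved pid-1077343 lines belong to no-preload-587009):

    # Check for the apiserver process and container exactly as the loop does.
    minikube -p <profile> ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    minikube -p <profile> ssh -- sudo crictl ps -a --quiet --name=kube-apiserver

Both commands returning nothing on every cycle, as logged here, means the control plane never started rather than crashed partway through the run.
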
	W1210 07:56:32.054022 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:34.554950 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:35.557491 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:35.568723 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:35.568795 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:35.601157 1078428 cri.go:89] found id: ""
	I1210 07:56:35.601184 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.601193 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:35.601200 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:35.601260 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:35.628459 1078428 cri.go:89] found id: ""
	I1210 07:56:35.628494 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.628503 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:35.628509 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:35.628570 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:35.656310 1078428 cri.go:89] found id: ""
	I1210 07:56:35.656332 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.656342 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:35.656348 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:35.656404 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:35.680954 1078428 cri.go:89] found id: ""
	I1210 07:56:35.680980 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.680992 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:35.680998 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:35.681055 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:35.708548 1078428 cri.go:89] found id: ""
	I1210 07:56:35.708575 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.708584 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:35.708590 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:35.708648 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:35.736013 1078428 cri.go:89] found id: ""
	I1210 07:56:35.736040 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.736049 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:35.736056 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:35.736124 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:35.760465 1078428 cri.go:89] found id: ""
	I1210 07:56:35.760495 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.760504 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:35.760511 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:35.760574 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:35.785429 1078428 cri.go:89] found id: ""
	I1210 07:56:35.785451 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.785460 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:35.785469 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:35.785481 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:35.871280 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:35.862207   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.863090   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.864745   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.865288   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.866984   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:35.862207   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.863090   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.864745   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.865288   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.866984   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:35.871302 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:35.871315 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:35.897087 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:35.897124 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:35.925107 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:35.925134 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:35.981188 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:35.981270 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
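
Each "failed describe nodes" entry is the same kubectl invocation exiting with status 1 because nothing answers on localhost:8443 inside the node; the memcache.go lines are client-side discovery failures, kubectl giving up before it can even fetch the API group list. The binary and kubeconfig paths for a manual re-run can be taken verbatim from the log (profile placeholder as above):

    # Reproduce the describe-nodes probe with the node's own kubectl binary.
    minikube -p <profile> ssh -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
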
	I1210 07:56:38.499048 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:38.509835 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:38.509908 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:38.534615 1078428 cri.go:89] found id: ""
	I1210 07:56:38.534637 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.534645 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:38.534652 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:38.534708 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:38.576309 1078428 cri.go:89] found id: ""
	I1210 07:56:38.576332 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.576341 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:38.576347 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:38.576407 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:38.611259 1078428 cri.go:89] found id: ""
	I1210 07:56:38.611281 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.611290 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:38.611297 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:38.611357 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:38.637583 1078428 cri.go:89] found id: ""
	I1210 07:56:38.637612 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.637621 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:38.637627 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:38.637686 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:38.662187 1078428 cri.go:89] found id: ""
	I1210 07:56:38.662267 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.662290 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:38.662310 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:38.662402 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:38.686838 1078428 cri.go:89] found id: ""
	I1210 07:56:38.686861 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.686869 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:38.686876 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:38.686933 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:38.710788 1078428 cri.go:89] found id: ""
	I1210 07:56:38.710815 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.710824 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:38.710831 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:38.710930 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:38.736531 1078428 cri.go:89] found id: ""
	I1210 07:56:38.736556 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.736565 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:38.736575 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:38.736589 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:38.752335 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:38.752364 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:38.826607 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:38.813602   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.818748   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.819180   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.820763   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.821332   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:38.813602   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.818748   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.819180   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.820763   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.821332   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:38.826675 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:38.826688 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:38.854204 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:38.854240 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:38.883619 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:38.883647 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:56:37.054712 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:39.554110 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:41.439316 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:41.450451 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:41.450532 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:41.476998 1078428 cri.go:89] found id: ""
	I1210 07:56:41.477022 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.477030 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:41.477036 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:41.477096 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:41.502043 1078428 cri.go:89] found id: ""
	I1210 07:56:41.502069 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.502078 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:41.502084 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:41.502145 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:41.526905 1078428 cri.go:89] found id: ""
	I1210 07:56:41.526931 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.526940 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:41.526947 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:41.527007 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:41.558750 1078428 cri.go:89] found id: ""
	I1210 07:56:41.558779 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.558788 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:41.558795 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:41.558851 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:41.596637 1078428 cri.go:89] found id: ""
	I1210 07:56:41.596664 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.596674 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:41.596680 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:41.596742 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:41.622316 1078428 cri.go:89] found id: ""
	I1210 07:56:41.622340 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.622348 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:41.622355 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:41.622418 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:41.648410 1078428 cri.go:89] found id: ""
	I1210 07:56:41.648482 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.648511 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:41.648518 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:41.648581 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:41.680776 1078428 cri.go:89] found id: ""
	I1210 07:56:41.680802 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.680811 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:41.680820 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:41.680832 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:41.708185 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:41.708211 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:41.767625 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:41.767662 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:41.784949 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:41.784980 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:41.871610 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:41.863026   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.863723   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.865304   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.865872   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.867591   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:41.863026   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.863723   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.865304   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.865872   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.867591   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:41.871632 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:41.871645 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
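
Every refusal in this section is against [::1]:8443, so a listener check inside the node separates "apiserver not running" from "apiserver bound elsewhere". A hedged sketch, assuming ss from iproute2 is present in the node image:

    # Expect no output while the apiserver is down; a LISTEN line here would
    # point to a bind-address or proxy problem instead.
    minikube -p <profile> ssh -- sudo ss -tlnp 'sport = :8443'
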
	I1210 07:56:44.398611 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:44.408733 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:44.408806 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:44.432507 1078428 cri.go:89] found id: ""
	I1210 07:56:44.432531 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.432540 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:44.432546 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:44.432607 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:44.457597 1078428 cri.go:89] found id: ""
	I1210 07:56:44.457622 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.457631 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:44.457637 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:44.457697 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:44.485123 1078428 cri.go:89] found id: ""
	I1210 07:56:44.485149 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.485158 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:44.485165 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:44.485228 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	W1210 07:56:42.054022 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:44.054891 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:44.510813 1078428 cri.go:89] found id: ""
	I1210 07:56:44.510848 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.510857 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:44.510870 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:44.510929 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:44.534504 1078428 cri.go:89] found id: ""
	I1210 07:56:44.534528 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.534537 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:44.534543 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:44.534600 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:44.574866 1078428 cri.go:89] found id: ""
	I1210 07:56:44.574940 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.574962 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:44.574983 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:44.575074 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:44.605450 1078428 cri.go:89] found id: ""
	I1210 07:56:44.605523 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.605546 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:44.605566 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:44.605652 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:44.633965 1078428 cri.go:89] found id: ""
	I1210 07:56:44.634039 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.634064 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:44.634087 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:44.634124 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:44.692591 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:44.692628 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:44.708687 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:44.708718 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:44.774532 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:44.765883   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.766513   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.768058   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.769010   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.770731   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:44.765883   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.766513   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.768058   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.769010   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.770731   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:44.774581 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:44.774594 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:44.801145 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:44.801235 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
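
The kubelet journal collected on each pass is where the root cause most likely lands, since kubelet creates the static apiserver pod from its manifest. A sketch for filtering the same 400-line window the loop reads down to its error lines:

    # Filter the identical journalctl window minikube gathers above.
    minikube -p <profile> ssh -- \
      "sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40"
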
	I1210 07:56:47.336116 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:47.346722 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:47.346793 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:47.370822 1078428 cri.go:89] found id: ""
	I1210 07:56:47.370860 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.370870 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:47.370876 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:47.370948 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:47.401111 1078428 cri.go:89] found id: ""
	I1210 07:56:47.401140 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.401149 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:47.401155 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:47.401212 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:47.430968 1078428 cri.go:89] found id: ""
	I1210 07:56:47.430991 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.430999 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:47.431004 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:47.431063 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:47.455626 1078428 cri.go:89] found id: ""
	I1210 07:56:47.455650 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.455659 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:47.455665 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:47.455722 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:47.479857 1078428 cri.go:89] found id: ""
	I1210 07:56:47.479882 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.479890 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:47.479896 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:47.479959 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:47.504271 1078428 cri.go:89] found id: ""
	I1210 07:56:47.504294 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.504305 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:47.504312 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:47.504373 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:47.532761 1078428 cri.go:89] found id: ""
	I1210 07:56:47.532837 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.532863 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:47.532886 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:47.532990 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:47.570086 1078428 cri.go:89] found id: ""
	I1210 07:56:47.570108 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.570116 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:47.570125 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:47.570137 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:47.586049 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:47.586078 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:47.655434 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:47.647357   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.647927   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.649588   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.650042   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.651608   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:47.647357   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.647927   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.649588   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.650042   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.651608   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:47.655455 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:47.655470 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:47.680757 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:47.680794 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:47.708957 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:47.708986 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:56:46.554013 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:49.054042 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:50.265598 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:50.276268 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:50.276342 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:50.301484 1078428 cri.go:89] found id: ""
	I1210 07:56:50.301507 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.301515 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:50.301521 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:50.301582 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:50.327230 1078428 cri.go:89] found id: ""
	I1210 07:56:50.327255 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.327264 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:50.327270 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:50.327331 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:50.352201 1078428 cri.go:89] found id: ""
	I1210 07:56:50.352224 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.352233 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:50.352239 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:50.352299 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:50.377546 1078428 cri.go:89] found id: ""
	I1210 07:56:50.377571 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.377580 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:50.377586 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:50.377647 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:50.403517 1078428 cri.go:89] found id: ""
	I1210 07:56:50.403544 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.403552 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:50.403559 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:50.403635 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:50.432794 1078428 cri.go:89] found id: ""
	I1210 07:56:50.432820 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.432829 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:50.432835 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:50.432924 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:50.456905 1078428 cri.go:89] found id: ""
	I1210 07:56:50.456931 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.456941 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:50.456947 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:50.457013 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:50.488840 1078428 cri.go:89] found id: ""
	I1210 07:56:50.488908 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.488932 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:50.488949 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:50.488962 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:50.547966 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:50.548000 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:50.565711 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:50.565789 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:50.652776 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:50.644502   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.645087   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.646685   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.647075   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.648875   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:50.644502   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.645087   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.646685   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.647075   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.648875   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:50.652800 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:50.652815 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:50.678909 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:50.678950 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:53.207825 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:53.218403 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:53.218500 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:53.244529 1078428 cri.go:89] found id: ""
	I1210 07:56:53.244556 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.244565 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:53.244572 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:53.244629 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:53.270382 1078428 cri.go:89] found id: ""
	I1210 07:56:53.270408 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.270418 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:53.270424 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:53.270517 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:53.295316 1078428 cri.go:89] found id: ""
	I1210 07:56:53.295342 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.295352 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:53.295358 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:53.295425 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:53.324326 1078428 cri.go:89] found id: ""
	I1210 07:56:53.324351 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.324360 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:53.324367 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:53.324444 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:53.349399 1078428 cri.go:89] found id: ""
	I1210 07:56:53.349425 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.349435 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:53.349441 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:53.349555 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:53.374280 1078428 cri.go:89] found id: ""
	I1210 07:56:53.374305 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.374314 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:53.374321 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:53.374431 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:53.398894 1078428 cri.go:89] found id: ""
	I1210 07:56:53.398920 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.398929 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:53.398935 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:53.398992 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:53.423872 1078428 cri.go:89] found id: ""
	I1210 07:56:53.423897 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.423907 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:53.423920 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:53.423936 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:53.440226 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:53.440258 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:53.503949 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:53.495490   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.495917   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.497631   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.498044   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.499663   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:53.495490   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.495917   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.497631   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.498044   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.499663   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
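
Every "describe nodes" attempt in this stretch of the log fails the same way: kubectl, run on the node with the minikube-managed kubeconfig, dials https://localhost:8443 and is refused, meaning nothing is listening on the apiserver port yet. A minimal Go sketch of a preflight that reproduces what the refused dial tells us (the address is taken from the log above; none of this is minikube's actual code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same endpoint kubectl keeps failing against in the log above.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// A "connect: connection refused" here matches the memcache.go errors:
			// nothing is bound to 8443, so kube-apiserver never came up.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}
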
	I1210 07:56:53.503975 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:53.503989 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:53.530691 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:53.530737 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:53.577761 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:53.577835 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
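
The block above is one full diagnostic pass: for each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) minikube runs "crictl ps -a --quiet --name=<component>" over SSH, treats empty output as zero containers, and then falls back to gathering journalctl, dmesg, and container-status output. A minimal sketch of that probe loop, with a runSSH stand-in for minikube's ssh_runner (this is an illustration under those assumptions, not minikube's logs.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// runSSH stands in for minikube's ssh_runner; here it just runs the
	// command locally so the sketch stays self-contained.
	func runSSH(cmd string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c", cmd).Output()
		return string(out), err
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			out, err := runSSH("sudo crictl ps -a --quiet --name=" + name)
			ids := strings.Fields(out)
			if err != nil || len(ids) == 0 {
				// Matches the repeated `found id: ""` / `0 containers` lines above.
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		}
	}

Because every probe returns an empty ID list, each pass ends in the same fallback gathering, which is why the cycle repeats every few seconds until the outer deadline expires.
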
	W1210 07:56:51.054085 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:53.054150 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:56.142597 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:56.153164 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:56.153234 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:56.177358 1078428 cri.go:89] found id: ""
	I1210 07:56:56.177391 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.177400 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:56.177406 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:56.177475 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:56.202573 1078428 cri.go:89] found id: ""
	I1210 07:56:56.202641 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.202657 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:56.202664 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:56.202725 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:56.226758 1078428 cri.go:89] found id: ""
	I1210 07:56:56.226785 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.226795 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:56.226802 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:56.226891 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:56.250286 1078428 cri.go:89] found id: ""
	I1210 07:56:56.250310 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.250319 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:56.250327 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:56.250381 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:56.276297 1078428 cri.go:89] found id: ""
	I1210 07:56:56.276375 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.276391 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:56.276398 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:56.276458 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:56.301334 1078428 cri.go:89] found id: ""
	I1210 07:56:56.301366 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.301375 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:56.301382 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:56.301450 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:56.325521 1078428 cri.go:89] found id: ""
	I1210 07:56:56.325557 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.325566 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:56.325572 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:56.325640 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:56.351180 1078428 cri.go:89] found id: ""
	I1210 07:56:56.351219 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.351228 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:56.351237 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:56.351249 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:56.406556 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:56.406592 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:56.422756 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:56.422788 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:56.486945 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:56.478739   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.479484   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.481033   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.481354   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.483059   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:56.478739   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.479484   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.481033   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.481354   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.483059   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:56.486967 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:56.486983 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:56.512575 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:56.512616 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:59.046618 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:59.059092 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:59.059161 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:59.089542 1078428 cri.go:89] found id: ""
	I1210 07:56:59.089571 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.089580 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:59.089586 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:59.089648 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:59.118669 1078428 cri.go:89] found id: ""
	I1210 07:56:59.118691 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.118700 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:59.118706 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:59.118770 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:59.143775 1078428 cri.go:89] found id: ""
	I1210 07:56:59.143802 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.143814 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:59.143821 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:59.143880 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:59.167972 1078428 cri.go:89] found id: ""
	I1210 07:56:59.167997 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.168006 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:59.168012 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:59.168088 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:59.195291 1078428 cri.go:89] found id: ""
	I1210 07:56:59.195316 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.195325 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:59.195331 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:59.195434 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:59.219900 1078428 cri.go:89] found id: ""
	I1210 07:56:59.219928 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.219937 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:59.219943 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:59.220002 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:59.252792 1078428 cri.go:89] found id: ""
	I1210 07:56:59.252818 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.252827 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:59.252834 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:59.252894 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:59.281785 1078428 cri.go:89] found id: ""
	I1210 07:56:59.281808 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.281823 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:59.281832 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:59.281843 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:59.337457 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:59.337496 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:59.353622 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:59.353650 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:59.423704 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:59.414855   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.415874   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.416954   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.418065   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.418769   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:59.414855   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.415874   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.416954   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.418065   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.418769   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:59.423725 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:59.423739 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:59.449814 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:59.449853 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:56:55.554362 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:57.554656 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:59.554765 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:57:01.979246 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:01.990999 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:01.991072 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:02.022990 1078428 cri.go:89] found id: ""
	I1210 07:57:02.023028 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.023038 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:02.023046 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:02.023109 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:02.050830 1078428 cri.go:89] found id: ""
	I1210 07:57:02.050857 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.050867 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:02.050873 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:02.050930 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:02.080878 1078428 cri.go:89] found id: ""
	I1210 07:57:02.080901 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.080909 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:02.080915 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:02.080974 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:02.111744 1078428 cri.go:89] found id: ""
	I1210 07:57:02.111766 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.111774 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:02.111780 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:02.111838 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:02.139560 1078428 cri.go:89] found id: ""
	I1210 07:57:02.139587 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.139596 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:02.139602 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:02.139662 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:02.164249 1078428 cri.go:89] found id: ""
	I1210 07:57:02.164274 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.164282 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:02.164289 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:02.164347 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:02.191165 1078428 cri.go:89] found id: ""
	I1210 07:57:02.191187 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.191196 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:02.191202 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:02.191280 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:02.220305 1078428 cri.go:89] found id: ""
	I1210 07:57:02.220371 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.220395 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:02.220419 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:02.220447 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:02.275451 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:02.275490 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:02.291722 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:02.291797 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:02.357294 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:02.349371   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.349907   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.351488   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.351933   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.353434   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:02.349371   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.349907   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.351488   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.351933   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.353434   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:02.357319 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:02.357333 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:02.382557 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:02.382591 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:57:02.053955 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:57:04.553976 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:57:04.913285 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:04.924140 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:04.924214 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:04.949752 1078428 cri.go:89] found id: ""
	I1210 07:57:04.949787 1078428 logs.go:282] 0 containers: []
	W1210 07:57:04.949796 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:04.949803 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:04.949869 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:04.974850 1078428 cri.go:89] found id: ""
	I1210 07:57:04.974876 1078428 logs.go:282] 0 containers: []
	W1210 07:57:04.974886 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:04.974892 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:04.974949 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:04.999787 1078428 cri.go:89] found id: ""
	I1210 07:57:04.999853 1078428 logs.go:282] 0 containers: []
	W1210 07:57:04.999868 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:04.999875 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:04.999937 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:05.031544 1078428 cri.go:89] found id: ""
	I1210 07:57:05.031570 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.031580 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:05.031586 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:05.031644 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:05.068235 1078428 cri.go:89] found id: ""
	I1210 07:57:05.068262 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.068272 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:05.068278 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:05.068337 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:05.101435 1078428 cri.go:89] found id: ""
	I1210 07:57:05.101462 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.101472 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:05.101479 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:05.101545 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:05.129616 1078428 cri.go:89] found id: ""
	I1210 07:57:05.129640 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.129648 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:05.129654 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:05.129733 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:05.155520 1078428 cri.go:89] found id: ""
	I1210 07:57:05.155544 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.155553 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:05.155563 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:05.155575 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:05.212400 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:05.212436 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:05.228606 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:05.228643 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:05.292822 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:05.284723   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.285318   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.286836   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.287339   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.288802   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:05.284723   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.285318   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.286836   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.287339   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.288802   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:05.292845 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:05.292858 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:05.318694 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:05.318732 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:07.846610 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:07.857861 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:07.857939 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:07.885093 1078428 cri.go:89] found id: ""
	I1210 07:57:07.885115 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.885124 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:07.885130 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:07.885192 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:07.909018 1078428 cri.go:89] found id: ""
	I1210 07:57:07.909043 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.909052 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:07.909058 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:07.909116 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:07.935262 1078428 cri.go:89] found id: ""
	I1210 07:57:07.935288 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.935298 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:07.935303 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:07.935366 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:07.959939 1078428 cri.go:89] found id: ""
	I1210 07:57:07.959965 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.959974 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:07.959981 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:07.960039 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:07.991314 1078428 cri.go:89] found id: ""
	I1210 07:57:07.991341 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.991350 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:07.991356 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:07.991415 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:08.020601 1078428 cri.go:89] found id: ""
	I1210 07:57:08.020628 1078428 logs.go:282] 0 containers: []
	W1210 07:57:08.020638 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:08.020645 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:08.020709 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:08.049221 1078428 cri.go:89] found id: ""
	I1210 07:57:08.049250 1078428 logs.go:282] 0 containers: []
	W1210 07:57:08.049259 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:08.049265 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:08.049323 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:08.078839 1078428 cri.go:89] found id: ""
	I1210 07:57:08.078862 1078428 logs.go:282] 0 containers: []
	W1210 07:57:08.078870 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:08.078883 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:08.078896 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:08.098811 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:08.098888 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:08.168958 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:08.160514   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.160982   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.162642   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.163069   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.164788   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:08.160514   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.160982   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.162642   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.163069   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.164788   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:08.169024 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:08.169046 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:08.195261 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:08.195297 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:08.222093 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:08.222121 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:57:06.554902 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:57:09.054181 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:57:10.778721 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:10.791524 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:10.791597 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:10.819485 1078428 cri.go:89] found id: ""
	I1210 07:57:10.819507 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.819519 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:10.819525 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:10.819585 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:10.872623 1078428 cri.go:89] found id: ""
	I1210 07:57:10.872646 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.872654 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:10.872660 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:10.872724 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:10.898357 1078428 cri.go:89] found id: ""
	I1210 07:57:10.898378 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.898387 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:10.898393 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:10.898448 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:10.923976 1078428 cri.go:89] found id: ""
	I1210 07:57:10.924000 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.924009 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:10.924016 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:10.924095 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:10.952951 1078428 cri.go:89] found id: ""
	I1210 07:57:10.952986 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.952996 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:10.953002 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:10.953069 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:10.977761 1078428 cri.go:89] found id: ""
	I1210 07:57:10.977793 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.977802 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:10.977808 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:10.977878 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:11.009022 1078428 cri.go:89] found id: ""
	I1210 07:57:11.009052 1078428 logs.go:282] 0 containers: []
	W1210 07:57:11.009069 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:11.009076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:11.009147 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:11.034444 1078428 cri.go:89] found id: ""
	I1210 07:57:11.034493 1078428 logs.go:282] 0 containers: []
	W1210 07:57:11.034502 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:11.034512 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:11.034523 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:11.098059 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:11.098096 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:11.117339 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:11.117370 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:11.190897 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:11.182016   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.182889   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.184458   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.184955   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.186522   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:11.182016   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.182889   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.184458   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.184955   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.186522   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:11.190919 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:11.190932 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:11.215685 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:11.215722 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:13.744333 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:13.754962 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:13.755031 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:13.783588 1078428 cri.go:89] found id: ""
	I1210 07:57:13.783611 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.783619 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:13.783625 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:13.783683 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:13.819100 1078428 cri.go:89] found id: ""
	I1210 07:57:13.819122 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.819130 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:13.819136 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:13.819193 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:13.860234 1078428 cri.go:89] found id: ""
	I1210 07:57:13.860257 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.860266 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:13.860272 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:13.860332 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:13.886331 1078428 cri.go:89] found id: ""
	I1210 07:57:13.886406 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.886418 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:13.886424 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:13.886540 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:13.911054 1078428 cri.go:89] found id: ""
	I1210 07:57:13.911080 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.911089 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:13.911097 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:13.911172 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:13.934983 1078428 cri.go:89] found id: ""
	I1210 07:57:13.935051 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.935066 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:13.935073 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:13.935131 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:13.960415 1078428 cri.go:89] found id: ""
	I1210 07:57:13.960440 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.960449 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:13.960455 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:13.960538 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:13.985917 1078428 cri.go:89] found id: ""
	I1210 07:57:13.985964 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.985974 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:13.985983 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:13.985995 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:14.046091 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:14.046336 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:14.068485 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:14.068513 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:14.145212 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:14.136671   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.137530   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.139238   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.139533   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.141026   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:14.136671   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.137530   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.139238   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.139533   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.141026   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:14.145235 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:14.145248 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:14.170375 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:14.170409 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
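	(The backtick substitution in the command above is a fallback chain: "which crictl || echo crictl" resolves to the crictl path when one is installed and to the bare name otherwise, and if that invocation fails entirely the "|| sudo docker ps -a" branch collects container status from the Docker CLI instead.)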
	W1210 07:57:11.553974 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:57:13.554028 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:57:15.554374 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:57:17.554945 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:57:19.054633 1077343 node_ready.go:38] duration metric: took 6m0.001135979s for node "no-preload-587009" to be "Ready" ...
	I1210 07:57:19.057729 1077343 out.go:203] 
	W1210 07:57:19.060573 1077343 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 07:57:19.060592 1077343 out.go:285] * 
	W1210 07:57:19.062943 1077343 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:57:19.065570 1077343 out.go:203] 
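	The connection-refused errors repeated above fail at the TCP level, before any Kubernetes semantics apply; a minimal manual re-check of the same endpoint, assuming curl and jq are available on the host (neither is part of the test harness):
	    # Query the exact URL from the log entries above; "connection refused" here
	    # means nothing is listening on 8443, so the Ready condition is unobservable.
	    curl -sk https://192.168.85.2:8443/api/v1/nodes/no-preload-587009 | jq '.status.conditions[] | select(.type == "Ready")'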
	I1210 07:57:16.699528 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:16.710231 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:16.710301 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:16.734299 1078428 cri.go:89] found id: ""
	I1210 07:57:16.734325 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.734333 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:16.734339 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:16.734402 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:16.759890 1078428 cri.go:89] found id: ""
	I1210 07:57:16.759916 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.759925 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:16.759934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:16.760017 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:16.788155 1078428 cri.go:89] found id: ""
	I1210 07:57:16.788181 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.788191 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:16.788197 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:16.788256 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:16.817801 1078428 cri.go:89] found id: ""
	I1210 07:57:16.817828 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.817837 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:16.817844 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:16.817904 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:16.845878 1078428 cri.go:89] found id: ""
	I1210 07:57:16.845905 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.845913 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:16.845919 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:16.845975 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:16.873613 1078428 cri.go:89] found id: ""
	I1210 07:57:16.873641 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.873651 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:16.873658 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:16.873719 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:16.898666 1078428 cri.go:89] found id: ""
	I1210 07:57:16.898689 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.898698 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:16.898704 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:16.898762 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:16.922533 1078428 cri.go:89] found id: ""
	I1210 07:57:16.922560 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.922569 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:16.922579 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:16.922591 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:16.948298 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:16.948341 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:16.976671 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:16.976699 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:17.033642 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:17.033681 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:17.052529 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:17.052568 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:17.131312 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:17.121533   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.122382   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.123692   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.124090   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.127051   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:17.121533   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.122382   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.123692   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.124090   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.127051   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:19.632225 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:19.644243 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:19.644343 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:19.682502 1078428 cri.go:89] found id: ""
	I1210 07:57:19.682536 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.682546 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:19.682553 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:19.682615 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:19.709431 1078428 cri.go:89] found id: ""
	I1210 07:57:19.709455 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.709464 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:19.709470 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:19.709532 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:19.739384 1078428 cri.go:89] found id: ""
	I1210 07:57:19.739426 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.739436 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:19.739442 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:19.739502 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:19.767244 1078428 cri.go:89] found id: ""
	I1210 07:57:19.767266 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.767274 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:19.767281 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:19.767338 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:19.802183 1078428 cri.go:89] found id: ""
	I1210 07:57:19.802207 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.802216 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:19.802222 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:19.802283 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:19.864351 1078428 cri.go:89] found id: ""
	I1210 07:57:19.864373 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.864381 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:19.864388 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:19.864446 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:19.923313 1078428 cri.go:89] found id: ""
	I1210 07:57:19.923336 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.923344 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:19.923350 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:19.923412 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:19.956689 1078428 cri.go:89] found id: ""
	I1210 07:57:19.956768 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.956792 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:19.956836 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:19.956870 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:20.020110 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:20.020150 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:20.041105 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:20.041136 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:20.171782 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:20.151749   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.157532   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.158302   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.161399   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.161675   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:20.151749   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.157532   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.158302   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.161399   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.161675   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:20.171803 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:20.171817 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:20.212388 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:20.212467 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:22.753904 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:22.771857 1078428 out.go:203] 
	W1210 07:57:22.774733 1078428 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1210 07:57:22.774767 1078428 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1210 07:57:22.774778 1078428 out.go:285] * Related issues:
	W1210 07:57:22.774790 1078428 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1210 07:57:22.774803 1078428 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1210 07:57:22.777684 1078428 out.go:203] 
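	The K8S_APISERVER_MISSING exit above reduces to the process check minikube itself runs (the pgrep command appears verbatim in this log); a hand-run sketch of the same probe, assuming the profile name from this report:
	    # Prints the apiserver PID if the process exists inside the node container;
	    # a non-zero exit past the 6m deadline is what triggers K8S_APISERVER_MISSING.
	    out/minikube-linux-arm64 ssh -p newest-cni-237317 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "apiserver process never appeared"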
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780066864Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780147053Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780256331Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780332672Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780400546Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780472966Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780539559Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780607409Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.781584850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.781686825Z" level=info msg="Connect containerd service"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.782018760Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.782725912Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.792587048Z" level=info msg="Start subscribing containerd event"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.792681047Z" level=info msg="Start recovering state"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.792879967Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.792982622Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.827708066Z" level=info msg="Start event monitor"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.827890403Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.827954912Z" level=info msg="Start streaming server"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.828030688Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.828089839Z" level=info msg="runtime interface starting up..."
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.828151371Z" level=info msg="starting plugins..."
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.828234219Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 07:51:20 newest-cni-237317 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.830614962Z" level=info msg="containerd successfully booted in 0.079173s"
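	The "failed to load cni during init" warning above is expected at this point for a --network-plugin=cni profile: no network config has been written yet. One way to confirm, assuming the profile name from this report:
	    # /etc/cni/net.d is the directory containerd complains about; it stays
	    # empty until a CNI plugin deploys a network config.
	    out/minikube-linux-arm64 ssh -p newest-cni-237317 -- ls -la /etc/cni/net.d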
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:34.055841   13749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:34.056757   13749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:34.058823   13749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:34.059125   13749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:34.060613   13749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:57:34 up  6:39,  0 user,  load average: 1.02, 0.73, 1.25
	Linux newest-cni-237317 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:57:29 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:57:30 newest-cni-237317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 491.
	Dec 10 07:57:30 newest-cni-237317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:31 newest-cni-237317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:31 newest-cni-237317 kubelet[13594]: E1210 07:57:31.318569   13594 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:57:31 newest-cni-237317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:57:31 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:57:32 newest-cni-237317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 10 07:57:32 newest-cni-237317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:32 newest-cni-237317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:32 newest-cni-237317 kubelet[13632]: E1210 07:57:32.101076   13632 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:57:32 newest-cni-237317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:57:32 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:57:32 newest-cni-237317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 10 07:57:32 newest-cni-237317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:32 newest-cni-237317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:32 newest-cni-237317 kubelet[13652]: E1210 07:57:32.867010   13652 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:57:32 newest-cni-237317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:57:32 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:57:33 newest-cni-237317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 10 07:57:33 newest-cni-237317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:33 newest-cni-237317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:33 newest-cni-237317 kubelet[13658]: E1210 07:57:33.611189   13658 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:57:33 newest-cni-237317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:57:33 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
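	Every restart in this excerpt dies on the same cgroup v1 validation error, which points at the host configuration rather than at Kubernetes itself; a standard probe (not part of the harness) to confirm which cgroup version the node sees:
	    # Prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on cgroup v1, the
	    # configuration this kubelet build refuses to run on.
	    out/minikube-linux-arm64 ssh -p newest-cni-237317 -- stat -fc %T /sys/fs/cgroup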
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-237317 -n newest-cni-237317
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-237317 -n newest-cni-237317: exit status 2 (332.128691ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-237317" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-237317
helpers_test.go:244: (dbg) docker inspect newest-cni-237317:

-- stdout --
	[
	    {
	        "Id": "a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d",
	        "Created": "2025-12-10T07:41:27.764165056Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1078597,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:51:14.851297935Z",
	            "FinishedAt": "2025-12-10T07:51:13.296430701Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d/hostname",
	        "HostsPath": "/var/lib/docker/containers/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d/hosts",
	        "LogPath": "/var/lib/docker/containers/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d/a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d-json.log",
	        "Name": "/newest-cni-237317",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-237317:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-237317",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a3bfe8c2955ad29d7d49c2f88ef46cf59c7c440872f9359180e7d523ce6aec9d",
	                "LowerDir": "/var/lib/docker/overlay2/eb95a77a82f1fcb16098f073f23236757bf5560cf9fb37f652c127fb3ef2dbb4-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eb95a77a82f1fcb16098f073f23236757bf5560cf9fb37f652c127fb3ef2dbb4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eb95a77a82f1fcb16098f073f23236757bf5560cf9fb37f652c127fb3ef2dbb4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eb95a77a82f1fcb16098f073f23236757bf5560cf9fb37f652c127fb3ef2dbb4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-237317",
	                "Source": "/var/lib/docker/volumes/newest-cni-237317/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-237317",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-237317",
	                "name.minikube.sigs.k8s.io": "newest-cni-237317",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1ce3a28f31774fef443c63794bb8a81b083cde3dd4d8dbf17e6f4c44906e905a",
	            "SandboxKey": "/var/run/docker/netns/1ce3a28f3177",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33845"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33849"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33847"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33848"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-237317": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:6f:71:0d:8d:a7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8181aebce826300f2c9eb8f48208470a68f1816a212863fa9c220fbbaa29953b",
	                    "EndpointID": "c0800f293b750ff5d10633caea6a666c9ca543920cb52ef2db3d40a6e4851b98",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-237317",
	                        "a3bfe8c2955a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
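The inspect dump above reduces to the two facts the post-mortem uses: the container is running, and 8443/tcp is published on the host. A compact sketch with docker's Go-template output (standard docker CLI usage, not part of the harness):
    # Container state ("running" here, even though the apiserver inside is not):
    docker inspect -f '{{.State.Status}}' newest-cni-237317
    # Host port published for the apiserver's 8443/tcp (33848 in the dump above):
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-237317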
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-237317 -n newest-cni-237317
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-237317 -n newest-cni-237317: exit status 2 (318.036601ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-237317 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-237317 logs -n 25: (1.929590671s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-254586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:41 UTC │
	│ image   │ default-k8s-diff-port-444518 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ pause   │ -p default-k8s-diff-port-444518 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ unpause │ -p default-k8s-diff-port-444518 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p default-k8s-diff-port-444518                                                                                                                                                                                                                            │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p default-k8s-diff-port-444518                                                                                                                                                                                                                            │ default-k8s-diff-port-444518 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ delete  │ -p disable-driver-mounts-262664                                                                                                                                                                                                                            │ disable-driver-mounts-262664 │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │ 10 Dec 25 07:40 UTC │
	│ start   │ -p no-preload-587009 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:40 UTC │                     │
	│ image   │ embed-certs-254586 image list --format=json                                                                                                                                                                                                                │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ pause   │ -p embed-certs-254586 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ unpause │ -p embed-certs-254586 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ delete  │ -p embed-certs-254586                                                                                                                                                                                                                                      │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ delete  │ -p embed-certs-254586                                                                                                                                                                                                                                      │ embed-certs-254586           │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │ 10 Dec 25 07:41 UTC │
	│ start   │ -p newest-cni-237317 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-587009 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:49 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-237317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:49 UTC │                     │
	│ stop    │ -p no-preload-587009 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │ 10 Dec 25 07:51 UTC │
	│ addons  │ enable dashboard -p no-preload-587009 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │ 10 Dec 25 07:51 UTC │
	│ start   │ -p no-preload-587009 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-587009            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │                     │
	│ stop    │ -p newest-cni-237317 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │ 10 Dec 25 07:51 UTC │
	│ addons  │ enable dashboard -p newest-cni-237317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │ 10 Dec 25 07:51 UTC │
	│ start   │ -p newest-cni-237317 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:51 UTC │                     │
	│ image   │ newest-cni-237317 image list --format=json                                                                                                                                                                                                                 │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:57 UTC │ 10 Dec 25 07:57 UTC │
	│ pause   │ -p newest-cni-237317 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:57 UTC │ 10 Dec 25 07:57 UTC │
	│ unpause │ -p newest-cni-237317 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-237317            │ jenkins │ v1.37.0 │ 10 Dec 25 07:57 UTC │ 10 Dec 25 07:57 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:51:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
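	(Decoding the first entry below by that format: "I" is severity Info, "1210" the month and day, "07:51:14.495415" the wall-clock timestamp, "1078428" the process id, and "out.go:360" the emitting file and line.)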
	I1210 07:51:14.495415 1078428 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:51:14.495519 1078428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:51:14.495524 1078428 out.go:374] Setting ErrFile to fd 2...
	I1210 07:51:14.495529 1078428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:51:14.495772 1078428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 07:51:14.496198 1078428 out.go:368] Setting JSON to false
	I1210 07:51:14.497022 1078428 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23599,"bootTime":1765329476,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 07:51:14.497081 1078428 start.go:143] virtualization:  
	I1210 07:51:14.500489 1078428 out.go:179] * [newest-cni-237317] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:51:14.503586 1078428 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:51:14.503671 1078428 notify.go:221] Checking for updates...
	I1210 07:51:14.509469 1078428 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:51:14.512370 1078428 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:14.515169 1078428 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 07:51:14.518012 1078428 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:51:14.520797 1078428 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:51:14.527169 1078428 config.go:182] Loaded profile config "newest-cni-237317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:14.527731 1078428 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:51:14.566042 1078428 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:51:14.566172 1078428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:51:14.628663 1078428 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-10 07:51:14.618086592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:51:14.628767 1078428 docker.go:319] overlay module found
	I1210 07:51:14.631981 1078428 out.go:179] * Using the docker driver based on existing profile
	I1210 07:51:14.634809 1078428 start.go:309] selected driver: docker
	I1210 07:51:14.634833 1078428 start.go:927] validating driver "docker" against &{Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:51:14.634946 1078428 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:51:14.635637 1078428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:51:14.728404 1078428 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-10 07:51:14.713293715 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:51:14.728788 1078428 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 07:51:14.728810 1078428 cni.go:84] Creating CNI manager for ""
	I1210 07:51:14.728854 1078428 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:51:14.728892 1078428 start.go:353] cluster config:
	{Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:51:14.732274 1078428 out.go:179] * Starting "newest-cni-237317" primary control-plane node in "newest-cni-237317" cluster
	I1210 07:51:14.735049 1078428 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:51:14.738088 1078428 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:51:14.740969 1078428 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:51:14.741011 1078428 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 07:51:14.741020 1078428 cache.go:65] Caching tarball of preloaded images
	I1210 07:51:14.741100 1078428 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 07:51:14.741110 1078428 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
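
The preload check above resolves a cache file whose name encodes every input the cache depends on: the preload schema (v18), the Kubernetes version, the container runtime, the storage driver, and the CPU architecture, so changing any of them misses the cache and triggers a download. A minimal Go sketch of that lookup; the directory layout is taken from the log, but the preloadPath helper itself is illustrative, not minikube's actual code:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadPath composes the cache file name the same way the logged path
	// does, so any change to version, runtime, driver, or arch changes the key.
	func preloadPath(miniHome, k8sVersion, runtime string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-arm64.tar.lz4", k8sVersion, runtime)
		return filepath.Join(miniHome, "cache", "preloaded-tarball", name)
	}

	func main() {
		p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.35.0-beta.0", "containerd")
		if _, err := os.Stat(p); err == nil {
			fmt.Println("found local preload:", p)
		} else {
			fmt.Println("no local preload, would download:", p)
		}
	}
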
	I1210 07:51:14.741232 1078428 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json ...
	I1210 07:51:14.741437 1078428 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:51:14.763634 1078428 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:51:14.763653 1078428 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:51:14.763668 1078428 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:51:14.763698 1078428 start.go:360] acquireMachinesLock for newest-cni-237317: {Name:mk865fae7594bb364b28b787041666ed4ecb9dd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:51:14.763755 1078428 start.go:364] duration metric: took 40.304µs to acquireMachinesLock for "newest-cni-237317"
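
The acquireMachinesLock line above shows a Delay of 500ms and a Timeout of 10m: minikube serializes host-mutating operations behind a named lock and polls until it wins or the timeout elapses, which is why the duration metric can be as small as 40µs when the lock is free. A sketch of that acquire loop under a simple lock-file scheme (minikube's real lock implementation differs; only the Delay/Timeout shape is taken from the log):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// tryLock creates the lock file with O_EXCL, so the existence check and
	// the create are a single atomic step on a local filesystem.
	func tryLock(path string) (bool, error) {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err != nil {
			if errors.Is(err, os.ErrExist) {
				return false, nil // another process holds the lock
			}
			return false, err
		}
		return true, f.Close()
	}

	// acquire polls with the logged Delay until the logged Timeout elapses.
	func acquire(path string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			ok, err := tryLock(path)
			if err != nil {
				return err
			}
			if ok {
				return nil
			}
			time.Sleep(delay)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}

	func main() {
		if err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
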
	I1210 07:51:14.763774 1078428 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:51:14.763779 1078428 fix.go:54] fixHost starting: 
	I1210 07:51:14.764055 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:14.807148 1078428 fix.go:112] recreateIfNeeded on newest-cni-237317: state=Stopped err=<nil>
	W1210 07:51:14.807188 1078428 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:51:10.742298 1077343 out.go:252] * Restarting existing docker container for "no-preload-587009" ...
	I1210 07:51:10.742407 1077343 cli_runner.go:164] Run: docker start no-preload-587009
	I1210 07:51:11.039727 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:11.064793 1077343 kic.go:430] container "no-preload-587009" state is running.
	I1210 07:51:11.065794 1077343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-587009
	I1210 07:51:11.090953 1077343 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/config.json ...
	I1210 07:51:11.091180 1077343 machine.go:94] provisionDockerMachine start ...
	I1210 07:51:11.091248 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:11.118540 1077343 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:11.118875 1077343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33840 <nil> <nil>}
	I1210 07:51:11.118891 1077343 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:51:11.119530 1077343 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38028->127.0.0.1:33840: read: connection reset by peer
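
The handshake failure above is expected immediately after `docker start`: the container is running before its sshd is listening, so provisioning retries until the port answers, which it does about three seconds later. An illustrative wait loop in that spirit, using plain TCP reachability as a cheap stand-in for a full SSH handshake (the address is the host port Docker mapped to the container's 22/tcp in the log):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH polls TCP connectivity to the forwarded SSH port until it
	// succeeds or the deadline passes.
	func waitForSSH(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			c, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				c.Close()
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not reachable within %s", addr, timeout)
	}

	func main() {
		if err := waitForSSH("127.0.0.1:33840", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
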
	I1210 07:51:14.269979 1077343 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-587009
	
	I1210 07:51:14.270011 1077343 ubuntu.go:182] provisioning hostname "no-preload-587009"
	I1210 07:51:14.270115 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:14.295536 1077343 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:14.295890 1077343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33840 <nil> <nil>}
	I1210 07:51:14.295901 1077343 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-587009 && echo "no-preload-587009" | sudo tee /etc/hostname
	I1210 07:51:14.452920 1077343 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-587009
	
	I1210 07:51:14.453011 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:14.478828 1077343 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:14.479134 1077343 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33840 <nil> <nil>}
	I1210 07:51:14.479150 1077343 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-587009' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-587009/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-587009' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:51:14.626210 1077343 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:51:14.626250 1077343 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 07:51:14.626279 1077343 ubuntu.go:190] setting up certificates
	I1210 07:51:14.626296 1077343 provision.go:84] configureAuth start
	I1210 07:51:14.626367 1077343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-587009
	I1210 07:51:14.653396 1077343 provision.go:143] copyHostCerts
	I1210 07:51:14.653479 1077343 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 07:51:14.653501 1077343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 07:51:14.653585 1077343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 07:51:14.653695 1077343 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 07:51:14.653708 1077343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 07:51:14.653739 1077343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 07:51:14.653813 1077343 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 07:51:14.653823 1077343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 07:51:14.653849 1077343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 07:51:14.653913 1077343 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.no-preload-587009 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-587009]
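
configureAuth regenerates the Docker server certificate with the SANs listed above so the daemon can be addressed as 127.0.0.1, the container IP, or any of its hostnames. A compact illustration of issuing a SAN-bearing server certificate in Go; for self-containment it fabricates a throwaway CA, whereas minikube signs with the ca.pem/ca-key.pem pair named in the log:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(3, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		srvKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		srv := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-587009"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			// SANs from the log line: every address the daemon may be dialed on.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
			DNSNames:    []string{"localhost", "minikube", "no-preload-587009"},
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("issued server cert, %d DER bytes\n", len(der))
	}
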
	I1210 07:51:14.987883 1077343 provision.go:177] copyRemoteCerts
	I1210 07:51:14.987956 1077343 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:51:14.988006 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.016190 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.122129 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:51:15.168648 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:51:15.209293 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:51:15.238881 1077343 provision.go:87] duration metric: took 612.568009ms to configureAuth
	I1210 07:51:15.238905 1077343 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:51:15.239106 1077343 config.go:182] Loaded profile config "no-preload-587009": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:15.239113 1077343 machine.go:97] duration metric: took 4.147925818s to provisionDockerMachine
	I1210 07:51:15.239121 1077343 start.go:293] postStartSetup for "no-preload-587009" (driver="docker")
	I1210 07:51:15.239133 1077343 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:51:15.239186 1077343 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:51:15.239227 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.259116 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.370554 1077343 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:51:15.375386 1077343 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:51:15.375413 1077343 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:51:15.375424 1077343 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 07:51:15.375477 1077343 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 07:51:15.375560 1077343 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 07:51:15.375669 1077343 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:51:15.386817 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:51:15.415888 1077343 start.go:296] duration metric: took 176.733864ms for postStartSetup
	I1210 07:51:15.416018 1077343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:51:15.416065 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.439058 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.548495 1077343 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:51:15.553596 1077343 fix.go:56] duration metric: took 4.831668845s for fixHost
	I1210 07:51:15.553633 1077343 start.go:83] releasing machines lock for "no-preload-587009", held for 4.831730515s
	I1210 07:51:15.553722 1077343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-587009
	I1210 07:51:15.586973 1077343 ssh_runner.go:195] Run: cat /version.json
	I1210 07:51:15.587034 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.587329 1077343 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:51:15.587396 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:15.629146 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.634697 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:15.746290 1077343 ssh_runner.go:195] Run: systemctl --version
	I1210 07:51:15.838801 1077343 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:51:15.843040 1077343 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:51:15.843111 1077343 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:51:15.851174 1077343 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:51:15.851245 1077343 start.go:496] detecting cgroup driver to use...
	I1210 07:51:15.851294 1077343 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:51:15.851351 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:51:15.869860 1077343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:51:15.883702 1077343 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:51:15.883777 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:51:15.899664 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:51:15.913011 1077343 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:51:16.034801 1077343 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:51:16.150617 1077343 docker.go:234] disabling docker service ...
	I1210 07:51:16.150759 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:51:16.165840 1077343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:51:16.180309 1077343 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:51:16.307789 1077343 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:51:16.432072 1077343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:51:16.444962 1077343 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:51:16.459040 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:51:16.467874 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:51:16.476775 1077343 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:51:16.476842 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:51:16.485489 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:51:16.494113 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:51:16.502936 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:51:16.511763 1077343 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:51:16.519893 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:51:16.528779 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:51:16.537342 1077343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:51:16.546138 1077343 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:51:16.553912 1077343 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:51:16.561714 1077343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:16.748597 1077343 ssh_runner.go:195] Run: sudo systemctl restart containerd
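
The run of sed commands above rewrites /etc/containerd/config.toml in place, most importantly forcing SystemdCgroup = false so containerd matches the cgroupfs driver detected on the host, before reloading systemd and restarting containerd. An equivalent of just the SystemdCgroup edit in Go; the path and pattern come from the logged command, but the program is illustrative rather than minikube's implementation:

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Same substitution as the logged sed: whatever the current value,
		// force SystemdCgroup = false while preserving indentation.
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			panic(err)
		}
	}

The kubelet and the runtime must agree on the cgroup driver; here both sides are pinned to cgroupfs, matching the detect.go line earlier in the log.
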
	I1210 07:51:16.865266 1077343 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:51:16.865408 1077343 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:51:16.869450 1077343 start.go:564] Will wait 60s for crictl version
	I1210 07:51:16.869562 1077343 ssh_runner.go:195] Run: which crictl
	I1210 07:51:16.873018 1077343 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:51:16.900099 1077343 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:51:16.900218 1077343 ssh_runner.go:195] Run: containerd --version
	I1210 07:51:16.923700 1077343 ssh_runner.go:195] Run: containerd --version
	I1210 07:51:16.947379 1077343 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 07:51:16.950227 1077343 cli_runner.go:164] Run: docker network inspect no-preload-587009 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
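
The --format argument above is a Go template that makes `docker network inspect` print a single JSON object containing only the fields minikube needs. A minimal sketch of decoding such a payload; the struct and the sample value are hand-written to match the template's field names and are not minikube's actual types:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// netInfo mirrors the field names emitted by the template above.
	type netInfo struct {
		Name         string   `json:"Name"`
		Driver       string   `json:"Driver"`
		Subnet       string   `json:"Subnet"`
		Gateway      string   `json:"Gateway"`
		MTU          int      `json:"MTU"`
		ContainerIPs []string `json:"ContainerIPs"`
	}

	func main() {
		// Hand-written sample payload in the shape the template produces.
		raw := `{"Name":"no-preload-587009","Driver":"bridge","Subnet":"192.168.85.0/24","Gateway":"192.168.85.1","MTU":0,"ContainerIPs":["192.168.85.2/24"]}`
		var n netInfo
		if err := json.Unmarshal([]byte(raw), &n); err != nil {
			panic(err)
		}
		fmt.Printf("%s: subnet %s, gateway %s\n", n.Name, n.Subnet, n.Gateway)
	}
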
	I1210 07:51:16.965229 1077343 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 07:51:16.969175 1077343 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
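
The one-liner above is an idempotent /etc/hosts update: filter out any previous host.minikube.internal entry, append the current gateway mapping, and copy the temp file back so the replacement lands in one step. The same logic in Go, minus the temp-file-and-cp dance; the IP and hostname are the ones from the log, the program itself is illustrative:

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const path = "/etc/hosts"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		keep := lines[:0]
		for _, line := range lines {
			// Drop any stale mapping, whatever IP it pointed at.
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				keep = append(keep, line)
			}
		}
		keep = append(keep, "192.168.85.1\thost.minikube.internal")
		if err := os.WriteFile(path, []byte(strings.Join(keep, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}
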
	I1210 07:51:16.978619 1077343 kubeadm.go:884] updating cluster {Name:no-preload-587009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:51:16.978743 1077343 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:51:16.978798 1077343 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:51:17.014301 1077343 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:51:17.014333 1077343 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:51:17.014341 1077343 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1210 07:51:17.014532 1077343 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-587009 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:51:17.014625 1077343 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:51:17.044039 1077343 cni.go:84] Creating CNI manager for ""
	I1210 07:51:17.044060 1077343 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:51:17.044082 1077343 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:51:17.044104 1077343 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-587009 NodeName:no-preload-587009 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:51:17.044222 1077343 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-587009"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
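The kubeadm config above stacks four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) rendered from the cluster config before being copied to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of rendering one such fragment with text/template, in the style minikube's generators use; the field names NodeIP and Port are made up for the example:

	package main

	import (
		"os"
		"text/template"
	)

	// tmpl covers only the InitConfiguration header; the real payload stacks
	// all four documents shown above, separated by "---".
	const tmpl = "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: {{.NodeIP}}\n  bindPort: {{.Port}}\n"

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		if err := t.Execute(os.Stdout, struct {
			NodeIP string
			Port   int
		}{"192.168.85.2", 8443}); err != nil {
			panic(err)
		}
	}
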
	I1210 07:51:17.044289 1077343 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:51:17.052024 1077343 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:51:17.052101 1077343 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:51:17.059722 1077343 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 07:51:17.072494 1077343 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:51:17.086253 1077343 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1210 07:51:17.099376 1077343 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:51:17.102883 1077343 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:51:17.112330 1077343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:17.225530 1077343 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:51:17.246996 1077343 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009 for IP: 192.168.85.2
	I1210 07:51:17.247021 1077343 certs.go:195] generating shared ca certs ...
	I1210 07:51:17.247038 1077343 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:17.247186 1077343 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 07:51:17.247238 1077343 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 07:51:17.247248 1077343 certs.go:257] generating profile certs ...
	I1210 07:51:17.247347 1077343 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/client.key
	I1210 07:51:17.247407 1077343 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.key.841ee17a
	I1210 07:51:17.247454 1077343 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/proxy-client.key
	I1210 07:51:17.247566 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 07:51:17.247604 1077343 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 07:51:17.247617 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:51:17.247646 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:51:17.247674 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:51:17.247712 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 07:51:17.247768 1077343 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:51:17.248384 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:51:17.265969 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:51:17.284190 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:51:17.302881 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:51:17.324073 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:51:17.341990 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 07:51:17.359614 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:51:17.377843 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/no-preload-587009/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:51:17.395426 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 07:51:17.413039 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:51:17.430522 1077343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 07:51:17.447821 1077343 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:51:17.460777 1077343 ssh_runner.go:195] Run: openssl version
	I1210 07:51:17.467243 1077343 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 07:51:17.474706 1077343 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 07:51:17.482273 1077343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 07:51:17.485950 1077343 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 07:51:17.486025 1077343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 07:51:17.526902 1077343 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:51:17.534224 1077343 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 07:51:17.541448 1077343 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 07:51:17.549037 1077343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 07:51:17.552765 1077343 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 07:51:17.552832 1077343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 07:51:17.595755 1077343 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:51:17.603128 1077343 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:17.610926 1077343 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:51:17.618981 1077343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:17.622497 1077343 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:17.622563 1077343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:17.663609 1077343 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
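
The test -L probes above look for hash-named symlinks such as b5213941.0: OpenSSL-style trust directories locate a CA by its subject hash, which is exactly the value `openssl x509 -hash -noout` prints in the preceding commands. An illustrative helper reproducing that link step for the minikubeCA certificate (paths taken from the log; the program is a sketch, not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"
		// openssl prints the subject hash that trust directories index by.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941, as probed above
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // replace a stale link, mirroring ln -fs
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link, "->", cert)
	}
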
	I1210 07:51:17.670957 1077343 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:51:17.674676 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:51:17.715746 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:51:17.758195 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:51:17.799081 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:51:17.840047 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:51:17.880964 1077343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
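
Each -checkend 86400 probe above asks whether a control-plane certificate expires within the next 24 hours, which determines whether certs must be regenerated before the restart. The same check in Go; the expiresWithin helper is illustrative, not minikube's:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// within duration d, mirroring `openssl x509 -checkend`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}
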
	I1210 07:51:17.921878 1077343 kubeadm.go:401] StartCluster: {Name:no-preload-587009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-587009 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:51:17.921988 1077343 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:51:17.922092 1077343 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:51:17.951649 1077343 cri.go:89] found id: ""
	I1210 07:51:17.951796 1077343 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:51:17.959534 1077343 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:51:17.959555 1077343 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:51:17.959635 1077343 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:51:17.966920 1077343 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:51:17.967331 1077343 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-587009" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:17.967425 1077343 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-784887/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-587009" cluster setting kubeconfig missing "no-preload-587009" context setting]
	I1210 07:51:17.967687 1077343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:17.968903 1077343 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:51:17.977669 1077343 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1210 07:51:17.977707 1077343 kubeadm.go:602] duration metric: took 18.146766ms to restartPrimaryControlPlane
	I1210 07:51:17.977718 1077343 kubeadm.go:403] duration metric: took 55.849318ms to StartCluster
	I1210 07:51:17.977733 1077343 settings.go:142] acquiring lock: {Name:mkea9bfe0055ec090e4ce10a287599541b65b38e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:17.977796 1077343 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:17.978427 1077343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:17.978652 1077343 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:51:17.978958 1077343 config.go:182] Loaded profile config "no-preload-587009": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:17.979006 1077343 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:51:17.979072 1077343 addons.go:70] Setting storage-provisioner=true in profile "no-preload-587009"
	I1210 07:51:17.979085 1077343 addons.go:239] Setting addon storage-provisioner=true in "no-preload-587009"
	I1210 07:51:17.979106 1077343 host.go:66] Checking if "no-preload-587009" exists ...
	I1210 07:51:17.979123 1077343 addons.go:70] Setting dashboard=true in profile "no-preload-587009"
	I1210 07:51:17.979139 1077343 addons.go:239] Setting addon dashboard=true in "no-preload-587009"
	W1210 07:51:17.979155 1077343 addons.go:248] addon dashboard should already be in state true
	I1210 07:51:17.979179 1077343 host.go:66] Checking if "no-preload-587009" exists ...
	I1210 07:51:17.979564 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:17.979606 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:17.982091 1077343 addons.go:70] Setting default-storageclass=true in profile "no-preload-587009"
	I1210 07:51:17.982247 1077343 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-587009"
	I1210 07:51:17.983173 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:17.984528 1077343 out.go:179] * Verifying Kubernetes components...
	I1210 07:51:17.987357 1077343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:18.030694 1077343 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:51:18.030828 1077343 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 07:51:18.034622 1077343 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 07:51:14.810511 1078428 out.go:252] * Restarting existing docker container for "newest-cni-237317" ...
	I1210 07:51:14.810602 1078428 cli_runner.go:164] Run: docker start newest-cni-237317
	I1210 07:51:15.140257 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:15.163514 1078428 kic.go:430] container "newest-cni-237317" state is running.
	I1210 07:51:15.165120 1078428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:51:15.200178 1078428 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/config.json ...
	I1210 07:51:15.200425 1078428 machine.go:94] provisionDockerMachine start ...
	I1210 07:51:15.200484 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:15.234652 1078428 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:15.234972 1078428 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33845 <nil> <nil>}
	I1210 07:51:15.234980 1078428 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:51:15.238112 1078428 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 07:51:18.394621 1078428 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-237317
	
	I1210 07:51:18.394726 1078428 ubuntu.go:182] provisioning hostname "newest-cni-237317"
	I1210 07:51:18.394818 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:18.424081 1078428 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:18.424400 1078428 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33845 <nil> <nil>}
	I1210 07:51:18.424411 1078428 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-237317 && echo "newest-cni-237317" | sudo tee /etc/hostname
	I1210 07:51:18.589360 1078428 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-237317
	
	I1210 07:51:18.589454 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:18.613196 1078428 main.go:143] libmachine: Using SSH client type: native
	I1210 07:51:18.613511 1078428 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33845 <nil> <nil>}
	I1210 07:51:18.613536 1078428 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-237317' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-237317/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-237317' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:51:18.750663 1078428 main.go:143] libmachine: SSH cmd err, output: <nil>: 
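
The shell snippet above is an idempotent /etc/hosts update: do nothing if a line for the new hostname already exists, rewrite an existing 127.0.1.1 entry if there is one, otherwise append a fresh entry. A minimal Go sketch of the same logic (a hypothetical helper for illustration, not minikube's actual code, which runs the grep/sed pipeline over SSH as logged):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// updateHosts returns the hosts-file content with an entry for hostname,
// mirroring the grep/sed logic in the log: no-op if the hostname is already
// present, rewrite an existing 127.0.1.1 line, else append a new one.
func updateHosts(hosts, hostname string) string {
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(hostname)+`$`).MatchString(hosts) {
		return hosts // hostname already present; leave the file alone
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
	}
	return strings.TrimRight(hosts, "\n") + fmt.Sprintf("\n127.0.1.1 %s\n", hostname)
}

func main() {
	fmt.Print(updateHosts("127.0.0.1 localhost\n127.0.1.1 old-name\n", "newest-cni-237317"))
}
```
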
	I1210 07:51:18.750693 1078428 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 07:51:18.750726 1078428 ubuntu.go:190] setting up certificates
	I1210 07:51:18.750745 1078428 provision.go:84] configureAuth start
	I1210 07:51:18.750808 1078428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:51:18.768151 1078428 provision.go:143] copyHostCerts
	I1210 07:51:18.768234 1078428 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 07:51:18.768250 1078428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 07:51:18.768328 1078428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 07:51:18.768450 1078428 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 07:51:18.768462 1078428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 07:51:18.768492 1078428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 07:51:18.768566 1078428 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 07:51:18.768583 1078428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 07:51:18.768617 1078428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 07:51:18.768682 1078428 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.newest-cni-237317 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-237317]
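
The provision step above issues a server certificate whose SANs cover the loopback address, the node IP, and the machine names. A minimal sketch of generating such a certificate with Go's standard library (self-signed here for brevity; as the log shows, minikube instead signs with the cluster CA in ca.pem/ca-key.pem):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a server certificate with the SANs from the log.
// It is self-signed for brevity; minikube signs with the cluster CA.
func newServerCert() ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-237317"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-237317"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	cert, err := newServerCert()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(cert))
}
```
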
	I1210 07:51:19.084729 1078428 provision.go:177] copyRemoteCerts
	I1210 07:51:19.084804 1078428 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:51:19.084849 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.104109 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:19.203019 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:51:19.223435 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:51:19.240802 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:51:19.257611 1078428 provision.go:87] duration metric: took 506.840522ms to configureAuth
	I1210 07:51:19.257643 1078428 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:51:19.257850 1078428 config.go:182] Loaded profile config "newest-cni-237317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:19.257864 1078428 machine.go:97] duration metric: took 4.057430572s to provisionDockerMachine
	I1210 07:51:19.257873 1078428 start.go:293] postStartSetup for "newest-cni-237317" (driver="docker")
	I1210 07:51:19.257887 1078428 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:51:19.257947 1078428 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:51:19.257992 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.274867 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:19.371336 1078428 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:51:19.375463 1078428 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:51:19.375497 1078428 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:51:19.375509 1078428 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 07:51:19.375559 1078428 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 07:51:19.375641 1078428 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 07:51:19.375745 1078428 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:51:19.386080 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:51:19.406230 1078428 start.go:296] duration metric: took 148.339109ms for postStartSetup
	I1210 07:51:19.406314 1078428 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:51:19.406379 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.424523 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:18.034780 1077343 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:18.034793 1077343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:51:18.034874 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:18.037543 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 07:51:18.037568 1077343 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 07:51:18.037639 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:18.041604 1077343 addons.go:239] Setting addon default-storageclass=true in "no-preload-587009"
	I1210 07:51:18.041645 1077343 host.go:66] Checking if "no-preload-587009" exists ...
	I1210 07:51:18.042060 1077343 cli_runner.go:164] Run: docker container inspect no-preload-587009 --format={{.State.Status}}
	I1210 07:51:18.105147 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:18.114730 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:18.115497 1077343 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:18.115511 1077343 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:51:18.115563 1077343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-587009
	I1210 07:51:18.135449 1077343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33840 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/no-preload-587009/id_rsa Username:docker}
	I1210 07:51:18.230094 1077343 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:51:18.264441 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:18.283658 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 07:51:18.283729 1077343 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 07:51:18.329062 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 07:51:18.329133 1077343 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 07:51:18.353549 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 07:51:18.353629 1077343 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 07:51:18.357622 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:18.376127 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 07:51:18.376202 1077343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 07:51:18.447999 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 07:51:18.448021 1077343 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 07:51:18.470186 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 07:51:18.470208 1077343 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 07:51:18.489233 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 07:51:18.489255 1077343 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 07:51:18.503805 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 07:51:18.503828 1077343 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 07:51:18.521545 1077343 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:18.521566 1077343 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 07:51:18.536611 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:19.053453 1077343 node_ready.go:35] waiting up to 6m0s for node "no-preload-587009" to be "Ready" ...
	W1210 07:51:19.053800 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.053834 1077343 retry.go:31] will retry after 261.467752ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:19.053883 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.053894 1077343 retry.go:31] will retry after 368.94912ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:19.054089 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.054104 1077343 retry.go:31] will retry after 338.426434ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.315446 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:19.382015 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.382044 1077343 retry.go:31] will retry after 337.060159ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.393358 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:19.424101 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:19.491743 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.491780 1077343 retry.go:31] will retry after 471.881278ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:19.538786 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.538838 1077343 retry.go:31] will retry after 528.879721ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.719721 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:19.790713 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.790742 1077343 retry.go:31] will retry after 510.29035ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.964160 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:20.068233 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:20.070746 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.070792 1077343 retry.go:31] will retry after 543.265245ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:20.148457 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.148492 1077343 retry.go:31] will retry after 460.630823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.301882 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:20.397427 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.397476 1077343 retry.go:31] will retry after 801.303312ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:19.524843 1078428 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:51:19.530920 1078428 fix.go:56] duration metric: took 4.767134196s for fixHost
	I1210 07:51:19.530943 1078428 start.go:83] releasing machines lock for "newest-cni-237317", held for 4.767180038s
	I1210 07:51:19.531010 1078428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-237317
	I1210 07:51:19.550838 1078428 ssh_runner.go:195] Run: cat /version.json
	I1210 07:51:19.550877 1078428 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:51:19.550890 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.550934 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:19.570871 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:19.573219 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:19.666233 1078428 ssh_runner.go:195] Run: systemctl --version
	I1210 07:51:19.757488 1078428 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:51:19.762554 1078428 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:51:19.762646 1078428 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:51:19.772614 1078428 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:51:19.772688 1078428 start.go:496] detecting cgroup driver to use...
	I1210 07:51:19.772735 1078428 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:51:19.772810 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:51:19.790830 1078428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:51:19.808563 1078428 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:51:19.808685 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:51:19.825219 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:51:19.839550 1078428 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:51:19.957848 1078428 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:51:20.106011 1078428 docker.go:234] disabling docker service ...
	I1210 07:51:20.106089 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:51:20.124597 1078428 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:51:20.139030 1078428 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:51:20.264730 1078428 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:51:20.405057 1078428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:51:20.418041 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:51:20.434060 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:51:20.443707 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:51:20.453162 1078428 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:51:20.453287 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:51:20.462485 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:51:20.471477 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:51:20.480685 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:51:20.489771 1078428 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:51:20.498259 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:51:20.507883 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:51:20.516803 1078428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:51:20.525782 1078428 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:51:20.533254 1078428 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:51:20.540718 1078428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:20.693669 1078428 ssh_runner.go:195] Run: sudo systemctl restart containerd
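
The run of `sed -i` commands above rewrites /etc/containerd/config.toml in place before the restart: pin the sandbox image, force `SystemdCgroup = false` to match the detected cgroupfs driver, and migrate legacy runtime names to `io.containerd.runc.v2`. A minimal Go sketch of the same string surgery (illustrative; minikube performs it over SSH with sed, exactly as logged):

```go
package main

import (
	"fmt"
	"regexp"
)

// patchContainerdConfig applies the same rewrites as the sed commands in
// the log: sandbox image, cgroup driver, and runc runtime version.
func patchContainerdConfig(toml string) string {
	rules := []struct{ pattern, repl string }{
		{`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`},
		{`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
		{`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
		{`"io\.containerd\.runc\.v1"`, `"io.containerd.runc.v2"`},
	}
	for _, r := range rules {
		toml = regexp.MustCompile(r.pattern).ReplaceAllString(toml, r.repl)
	}
	return toml
}

func main() {
	in := "  sandbox_image = \"registry.k8s.io/pause:3.9\"\n  SystemdCgroup = true\n"
	fmt.Print(patchContainerdConfig(in))
}
```
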
	I1210 07:51:20.831153 1078428 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:51:20.831249 1078428 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:51:20.835049 1078428 start.go:564] Will wait 60s for crictl version
	I1210 07:51:20.835127 1078428 ssh_runner.go:195] Run: which crictl
	I1210 07:51:20.838628 1078428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:51:20.863125 1078428 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:51:20.863217 1078428 ssh_runner.go:195] Run: containerd --version
	I1210 07:51:20.884709 1078428 ssh_runner.go:195] Run: containerd --version
	I1210 07:51:20.910533 1078428 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1210 07:51:20.913646 1078428 cli_runner.go:164] Run: docker network inspect newest-cni-237317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:51:20.930416 1078428 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 07:51:20.934716 1078428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:51:20.948181 1078428 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 07:51:20.951046 1078428 kubeadm.go:884] updating cluster {Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:51:20.951211 1078428 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 07:51:20.951303 1078428 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:51:20.976663 1078428 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:51:20.976691 1078428 containerd.go:534] Images already preloaded, skipping extraction
	I1210 07:51:20.976756 1078428 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:51:21.000721 1078428 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:51:21.000745 1078428 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:51:21.000753 1078428 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1210 07:51:21.000851 1078428 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-237317 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:51:21.000919 1078428 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:51:21.027129 1078428 cni.go:84] Creating CNI manager for ""
	I1210 07:51:21.027160 1078428 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:51:21.027182 1078428 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 07:51:21.027206 1078428 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-237317 NodeName:newest-cni-237317 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:51:21.027326 1078428 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-237317"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
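The rendered kubeadm config above is one multi-document YAML file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by --- markers. A small Go sketch that splits such a file on its document separators and reports each kind, assuming the config has been saved as a plain kubeadm.yaml (structure only, no schema validation):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // main splits a multi-document kubeadm YAML on "---" separators and
    // prints the kind: line of each document. The file name is a placeholder.
    func main() {
    	data, err := os.ReadFile("kubeadm.yaml")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	for i, doc := range strings.Split(string(data), "\n---\n") {
    		for _, line := range strings.Split(doc, "\n") {
    			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
    				fmt.Printf("document %d: %s\n", i, strings.TrimSpace(line))
    			}
    		}
    	}
    }

Against the config shown here, this would report InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in order.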
	
	I1210 07:51:21.027402 1078428 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:51:21.035339 1078428 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:51:21.035477 1078428 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:51:21.043040 1078428 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 07:51:21.056144 1078428 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:51:21.068486 1078428 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1210 07:51:21.080830 1078428 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:51:21.084334 1078428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:51:21.093747 1078428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:21.227754 1078428 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:51:21.255098 1078428 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317 for IP: 192.168.76.2
	I1210 07:51:21.255120 1078428 certs.go:195] generating shared ca certs ...
	I1210 07:51:21.255146 1078428 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:21.255299 1078428 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 07:51:21.255358 1078428 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 07:51:21.255372 1078428 certs.go:257] generating profile certs ...
	I1210 07:51:21.255486 1078428 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/client.key
	I1210 07:51:21.255553 1078428 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key.8d76a14f
	I1210 07:51:21.255599 1078428 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key
	I1210 07:51:21.255719 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 07:51:21.255759 1078428 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 07:51:21.255770 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:51:21.255801 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:51:21.255838 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:51:21.255870 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 07:51:21.255919 1078428 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 07:51:21.256545 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:51:21.311093 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:51:21.352581 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:51:21.373410 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:51:21.394506 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:51:21.429692 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:51:21.462387 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:51:21.492668 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/newest-cni-237317/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 07:51:21.520168 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 07:51:21.538625 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:51:21.556477 1078428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 07:51:21.574823 1078428 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:51:21.587970 1078428 ssh_runner.go:195] Run: openssl version
	I1210 07:51:21.594082 1078428 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:21.601606 1078428 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:51:21.609233 1078428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:21.613206 1078428 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:21.613303 1078428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:51:21.655122 1078428 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:51:21.662415 1078428 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 07:51:21.669633 1078428 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 07:51:21.677051 1078428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 07:51:21.680913 1078428 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 07:51:21.680973 1078428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 07:51:21.722892 1078428 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:51:21.730172 1078428 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 07:51:21.737341 1078428 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 07:51:21.744828 1078428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 07:51:21.748681 1078428 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 07:51:21.748767 1078428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 07:51:21.790554 1078428 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
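Each CA above is installed by symlinking /etc/ssl/certs/<subject-hash>.0 (here b5213941.0, 51391683.0, and 3ec20f2e.0) at the PEM file, where the hash comes from openssl x509 -hash -noout. A sketch of that install step in Go, shelling out to openssl; the paths are illustrative and writing into the real /etc/ssl/certs needs root:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCA computes the OpenSSL subject hash of a PEM certificate and
    // symlinks <certsDir>/<hash>.0 to it, so TLS stacks can find the CA by hash.
    func installCA(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := certsDir + "/" + hash + ".0"
    	os.Remove(link) // equivalent of ln -fs: replace any stale link first
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Println(err)
    	}
    }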
	I1210 07:51:21.797952 1078428 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:51:21.801618 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:51:21.842558 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:51:21.883251 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:51:21.924099 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:51:21.965360 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:51:22.007244 1078428 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
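openssl x509 -checkend 86400 exits non-zero if the certificate expires within the next 86400 seconds, which is how the six control-plane certs above are screened for imminent expiry. The same check in pure Go with crypto/x509 (the file name is a placeholder):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin mirrors `openssl x509 -checkend`: report whether the
    // certificate's NotAfter falls inside the next `window`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }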
	I1210 07:51:22.049094 1078428 kubeadm.go:401] StartCluster: {Name:newest-cni-237317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-237317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:51:22.049233 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:51:22.049334 1078428 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:51:22.093879 1078428 cri.go:89] found id: ""
	I1210 07:51:22.094034 1078428 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:51:22.108858 1078428 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:51:22.108920 1078428 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:51:22.109002 1078428 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:51:22.119866 1078428 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:51:22.120478 1078428 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-237317" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:22.120794 1078428 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-784887/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-237317" cluster setting kubeconfig missing "newest-cni-237317" context setting]
	I1210 07:51:22.121355 1078428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:22.123034 1078428 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:51:22.139211 1078428 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1210 07:51:22.139284 1078428 kubeadm.go:602] duration metric: took 30.344057ms to restartPrimaryControlPlane
	I1210 07:51:22.139309 1078428 kubeadm.go:403] duration metric: took 90.22699ms to StartCluster
	I1210 07:51:22.139351 1078428 settings.go:142] acquiring lock: {Name:mkea9bfe0055ec090e4ce10a287599541b65b38e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:22.139430 1078428 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:51:22.140615 1078428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:51:22.141197 1078428 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:51:22.141378 1078428 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:51:22.149299 1078428 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-237317"
	I1210 07:51:22.149322 1078428 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-237317"
	I1210 07:51:22.149353 1078428 host.go:66] Checking if "newest-cni-237317" exists ...
	I1210 07:51:22.149966 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:22.141985 1078428 config.go:182] Loaded profile config "newest-cni-237317": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:51:22.150417 1078428 addons.go:70] Setting dashboard=true in profile "newest-cni-237317"
	I1210 07:51:22.150441 1078428 addons.go:239] Setting addon dashboard=true in "newest-cni-237317"
	W1210 07:51:22.150449 1078428 addons.go:248] addon dashboard should already be in state true
	I1210 07:51:22.150502 1078428 host.go:66] Checking if "newest-cni-237317" exists ...
	I1210 07:51:22.151022 1078428 addons.go:70] Setting default-storageclass=true in profile "newest-cni-237317"
	I1210 07:51:22.151064 1078428 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-237317"
	I1210 07:51:22.151139 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:22.151406 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:22.154353 1078428 out.go:179] * Verifying Kubernetes components...
	I1210 07:51:22.159801 1078428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:51:22.209413 1078428 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:51:22.216779 1078428 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:22.216810 1078428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:51:22.216899 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:22.223328 1078428 addons.go:239] Setting addon default-storageclass=true in "newest-cni-237317"
	I1210 07:51:22.223372 1078428 host.go:66] Checking if "newest-cni-237317" exists ...
	I1210 07:51:22.223787 1078428 cli_runner.go:164] Run: docker container inspect newest-cni-237317 --format={{.State.Status}}
	I1210 07:51:22.224255 1078428 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 07:51:22.227259 1078428 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 07:51:22.230643 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 07:51:22.230670 1078428 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 07:51:22.230738 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:22.262205 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:22.304886 1078428 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:22.304913 1078428 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:51:22.305020 1078428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-237317
	I1210 07:51:22.320571 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:22.350629 1078428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/newest-cni-237317/id_rsa Username:docker}
	I1210 07:51:22.414331 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:22.428355 1078428 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:51:22.476480 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 07:51:22.476506 1078428 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 07:51:22.499604 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:22.511381 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.511434 1078428 retry.go:31] will retry after 354.449722ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
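While the apiserver is still coming up, every kubectl apply fails with connection refused, and retry.go reschedules it after a short randomized delay (354ms, 239ms, 517ms and so on in the lines that follow). A generic jittered-backoff sketch in Go, in the spirit of those retry lines; the base delay, growth factor, and attempt cap are assumptions, not minikube's actual retry parameters:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithJitter runs fn up to `attempts` times, sleeping a randomized,
    // exponentially growing delay between failures.
    func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %s: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	err := retryWithJitter(5, 300*time.Millisecond, func() error {
    		calls++
    		if calls < 3 {
    			return fmt.Errorf("connection refused") // stand-in for the apply failure
    		}
    		return nil
    	})
    	fmt.Println("result:", err)
    }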
	I1210 07:51:22.512377 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 07:51:22.512398 1078428 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 07:51:22.525695 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 07:51:22.525721 1078428 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 07:51:22.549890 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 07:51:22.549921 1078428 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 07:51:22.571318 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 07:51:22.571360 1078428 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 07:51:22.590078 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 07:51:22.590107 1078428 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 07:51:22.605317 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 07:51:22.605341 1078428 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 07:51:22.618168 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 07:51:22.618200 1078428 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 07:51:22.632058 1078428 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:22.632138 1078428 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 07:51:22.645108 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:22.866802 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:23.047272 1078428 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:51:23.047355 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:23.047482 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.047505 1078428 retry.go:31] will retry after 239.047353ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:23.047709 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.047727 1078428 retry.go:31] will retry after 188.716917ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:23.047786 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.047796 1078428 retry.go:31] will retry after 517.712293ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.237633 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:23.287256 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:23.302152 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.302252 1078428 retry.go:31] will retry after 469.586518ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:23.346821 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.346867 1078428 retry.go:31] will retry after 517.463027ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.548102 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:23.566734 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:23.638131 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.638161 1078428 retry.go:31] will retry after 398.122111ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.772509 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:23.859471 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.859510 1078428 retry.go:31] will retry after 826.751645ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.865483 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:23.933950 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.933981 1078428 retry.go:31] will retry after 776.320293ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.037254 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:24.047892 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:24.103304 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.103348 1078428 retry.go:31] will retry after 781.805737ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.609734 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:20.615162 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:20.763154 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.763202 1077343 retry.go:31] will retry after 629.698549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:20.763322 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:20.763340 1077343 retry.go:31] will retry after 624.408887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:21.054168 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
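[editor's note] The node_ready.go:55 line above is minikube polling the node's Ready condition and hitting the same refused connection. A hedged sketch of that check using client-go; the kubeconfig path and node name are the ones appearing in this log, and error handling is trimmed for brevity.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True,
// mirroring what minikube's node_ready.go polls for. Any error here
// (e.g. "connect: connection refused" while the apiserver restarts)
// is returned to the caller, which would retry.
func nodeReady(kubeconfig, name string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := nodeReady("/var/lib/minikube/kubeconfig", "no-preload-587009")
	fmt.Println(ok, err)
}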
	I1210 07:51:21.199599 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:21.288128 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:21.288156 1077343 retry.go:31] will retry after 1.429543278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:21.388486 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:21.393905 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:21.513396 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:21.513426 1077343 retry.go:31] will retry after 1.363983036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:21.522339 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:21.522370 1077343 retry.go:31] will retry after 1.881789089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.718226 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:22.784732 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.784765 1077343 retry.go:31] will retry after 2.14784628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.877998 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:22.948118 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:22.948146 1077343 retry.go:31] will retry after 2.832610868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:23.054783 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:23.404396 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:23.467879 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:23.467914 1077343 retry.go:31] will retry after 2.135960827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.933362 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:24.999854 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.999895 1077343 retry.go:31] will retry after 3.6382738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.548307 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:24.687434 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:24.711319 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:24.773539 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.773577 1078428 retry.go:31] will retry after 997.771985ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:24.790786 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.790863 1078428 retry.go:31] will retry after 982.839582ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.886098 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:24.963470 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:24.963508 1078428 retry.go:31] will retry after 1.65409552s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.047816 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:25.547590 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
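[editor's note] The repeated pgrep -xnf kube-apiserver.*minikube.* runs above are minikube waiting for a fresh apiserver process to appear before retrying the applies. The same probe from Go, via os/exec, as a sketch; pgrep exiting non-zero simply means no match yet.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverPID runs the same probe as the log's ssh_runner lines:
// pgrep -x (exact match) -n (newest) -f (match the full command line).
// A non-zero exit status means no kube-apiserver process was found.
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", `kube-apiserver.*minikube.*`).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	pid, err := apiserverPID()
	if err != nil {
		fmt.Println("apiserver not running yet:", err)
		return
	}
	fmt.Println("apiserver pid:", pid)
}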
	I1210 07:51:25.771778 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:51:25.774151 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:25.936732 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.936801 1078428 retry.go:31] will retry after 1.015181303s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:25.947734 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.947767 1078428 retry.go:31] will retry after 1.482437442s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
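	The repeated retry.go:31 entries above show how minikube handles these failures: each kubectl apply that exits non-zero is re-run after a growing, jittered delay until the apiserver answers again. A minimal Go sketch of that pattern (illustrative only; the helper below is an assumption for this report, not minikube's actual retry package):

	    package main

	    import (
	        "errors"
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    // retryWithBackoff re-runs fn until it succeeds or attempts run out,
	    // sleeping a growing, jittered interval between tries, matching the
	    // "will retry after ..." lines in the log above.
	    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	        var err error
	        for i := 0; i < attempts; i++ {
	            if err = fn(); err == nil {
	                return nil
	            }
	            // Grow the delay and add jitter so the parallel appliers
	            // (storageclass, storage-provisioner, dashboard) spread out.
	            delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
	            fmt.Printf("will retry after %s: %v\n", delay, err)
	            time.Sleep(delay)
	        }
	        return fmt.Errorf("all %d attempts failed: %w", attempts, err)
	    }

	    func main() {
	        err := retryWithBackoff(5, time.Second, func() error {
	            // Stand-in for the failing kubectl apply.
	            return errors.New("dial tcp [::1]:8443: connect: connection refused")
	        })
	        fmt.Println(err)
	    }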
	I1210 07:51:26.048146 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:26.547461 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:26.617808 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:26.678401 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:26.678435 1078428 retry.go:31] will retry after 1.557494695s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:26.952842 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:27.019482 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.019568 1078428 retry.go:31] will retry after 1.273355747s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.047573 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:27.431325 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:27.498014 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.498046 1078428 retry.go:31] will retry after 1.046464225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.548153 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:28.047490 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:28.236708 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:28.293309 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:28.313086 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.313117 1078428 retry.go:31] will retry after 2.925748723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:28.376082 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.376136 1078428 retry.go:31] will retry after 3.458373128s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.545585 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:28.548098 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:28.611335 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.611369 1078428 retry.go:31] will retry after 3.856495335s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:29.047665 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
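	Interleaved with the applies, the roughly half-second "sudo pgrep -xnf kube-apiserver.*minikube.*" runs are minikube polling for the apiserver process to come back. A standalone sketch of that polling loop (the function name and timeout are assumptions; the pgrep flags are taken from the log):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // waitForAPIServerProcess polls pgrep, as the log does roughly every
	    // 500ms, until a kube-apiserver process for this profile appears.
	    func waitForAPIServerProcess(timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            // -x: exact match, -n: newest process, -f: match the full
	            // command line; exactly the flags used in the log above.
	            out, err := exec.Command("sudo", "pgrep", "-xnf",
	                "kube-apiserver.*minikube.*").Output()
	            if err == nil && len(out) > 0 {
	                fmt.Printf("kube-apiserver pid: %s", out)
	                return nil
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	    }

	    func main() {
	        if err := waitForAPIServerProcess(30 * time.Second); err != nil {
	            fmt.Println(err)
	        }
	    }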
	W1210 07:51:25.554994 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:25.604337 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:25.669224 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.669262 1077343 retry.go:31] will retry after 2.194006804s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.781321 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:25.929708 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:25.929740 1077343 retry.go:31] will retry after 3.276039002s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.863966 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:27.927673 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:27.927709 1077343 retry.go:31] will retry after 5.303571514s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:28.054575 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
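	The node_ready.go:55 lines come from the parallel no-preload test (pid 1077343), whose output is interleaved in this log; it is polling the node's Ready condition and hitting the same refused port. A client-go sketch of that kind of check (an assumption written against the public client-go API, not minikube's node_ready.go):

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        for i := 0; i < 10; i++ {
	            node, err := cs.CoreV1().Nodes().Get(context.TODO(),
	                "no-preload-587009", metav1.GetOptions{})
	            if err != nil {
	                // With the apiserver down this is the same
	                // "connect: connection refused" seen above.
	                fmt.Println("will retry:", err)
	                time.Sleep(2 * time.Second)
	                continue
	            }
	            for _, c := range node.Status.Conditions {
	                if c.Type == corev1.NodeReady {
	                    fmt.Println("Ready condition:", c.Status)
	                }
	            }
	            return
	        }
	    }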
	I1210 07:51:28.639292 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:28.698653 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:28.698686 1077343 retry.go:31] will retry after 3.005783671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:29.206806 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:29.264930 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:29.264960 1077343 retry.go:31] will retry after 2.489245949s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
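	Every failure in this stretch reduces to the same root cause: nothing is listening on the apiserver port, seen as [::1]:8443 inside the node and 192.168.85.2:8443 from the test host. A quick reachability probe that reproduces the error without kubectl (a sketch; the addresses are the ones appearing in this log):

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    // apiServerReachable performs the same TCP dial the failing requests
	    // make, without the HTTP layer; "connection refused" means nothing is
	    // accepting connections on the port yet.
	    func apiServerReachable(addr string) bool {
	        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	        if err != nil {
	            fmt.Printf("dial %s: %v\n", addr, err)
	            return false
	        }
	        conn.Close()
	        return true
	    }

	    func main() {
	        for _, addr := range []string{"localhost:8443", "192.168.85.2:8443"} {
	            fmt.Println(addr, "reachable:", apiServerReachable(addr))
	        }
	    }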
	I1210 07:51:29.547947 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:30.047725 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:30.548382 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:31.048336 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:31.239688 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:31.305382 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.305411 1078428 retry.go:31] will retry after 5.48588333s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.547900 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:31.835667 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:31.907250 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.907288 1078428 retry.go:31] will retry after 3.413940388s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:32.047433 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:32.468741 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:32.529582 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:32.529616 1078428 retry.go:31] will retry after 2.765741211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:32.547808 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:33.048388 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:33.547638 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:34.048299 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:30.554528 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:31.705403 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:51:31.754983 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:31.764053 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.764088 1077343 retry.go:31] will retry after 6.263299309s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:31.824900 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:31.824937 1077343 retry.go:31] will retry after 8.063912103s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:32.554572 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:33.232049 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:33.291801 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:33.291838 1077343 retry.go:31] will retry after 5.361341065s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:34.554757 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
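Two runs are interleaved in this log: PID 1078428 is applying addons against localhost:8443, while PID 1077343 belongs to the no-preload-587009 profile, polling its node over 192.168.85.2:8443 and hitting the same connection refusal. The node_ready.go check it keeps retrying is essentially this kubectl query (illustrative):

    kubectl --kubeconfig /var/lib/minikube/kubeconfig get node no-preload-587009 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'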
	I1210 07:51:34.547845 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:35.048329 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:35.295932 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:35.322379 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:35.361522 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:35.361555 1078428 retry.go:31] will retry after 3.648316362s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:35.394430 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:35.394485 1078428 retry.go:31] will retry after 5.549499405s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:35.547462 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:36.048235 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:36.547640 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:36.792053 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:36.857078 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:36.857110 1078428 retry.go:31] will retry after 8.697501731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
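Each apply goes through the version-matched kubectl that minikube stages under /var/lib/minikube/binaries/<version>/, pointed at the cluster via the KUBECONFIG variable. A quick way to ask that same binary whether the apiserver is serving yet (illustrative, not part of this run):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get --raw /readyz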
	I1210 07:51:37.048326 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:37.548396 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:38.047529 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:38.547464 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:39.010651 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:51:39.048217 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:39.071638 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:39.071669 1078428 retry.go:31] will retry after 13.355816146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:37.053891 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:38.027881 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:38.116733 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:38.116768 1077343 retry.go:31] will retry after 12.105620641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:38.653613 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:38.715053 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:38.715087 1077343 retry.go:31] will retry after 11.375750542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:39.554885 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:39.889521 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:39.947993 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:39.948032 1077343 retry.go:31] will retry after 6.34767532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:39.547555 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:40.048271 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:40.548333 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:40.944176 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:41.005827 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:41.005869 1078428 retry.go:31] will retry after 6.58383212s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:41.047819 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:41.547642 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:42.048470 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:42.547646 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:43.047482 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:43.548313 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:44.048345 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
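The half-second pgrep loop is minikube waiting for a kube-apiserver process to appear at all: -f matches the pattern against the full command line, -x requires it to match that whole line, and -n reports only the newest matching PID. Run by hand it looks like this:

    # prints a PID (exit 0) only once kube-apiserver is running with
    # "minikube" somewhere on its command line
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'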
	W1210 07:51:42.054758 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:51:44.554149 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:44.547780 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:45.048251 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:45.547682 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:45.555791 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:45.648631 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:45.648667 1078428 retry.go:31] will retry after 11.694093059s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:46.048267 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:46.547745 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:47.047711 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:47.547488 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:47.590140 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:47.657175 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:47.657216 1078428 retry.go:31] will retry after 17.707179987s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:48.047554 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:48.547523 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:49.048229 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:46.296554 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:46.375385 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:46.375418 1077343 retry.go:31] will retry after 17.860418691s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:51:47.054540 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:51:49.054867 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
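In parallel, node_ready.go polls the node object for profile no-preload-587009 directly against the apiserver endpoint and logs a warning on every connection failure. A minimal sketch of that probe over plain HTTPS; a real client would authenticate with the cluster's client certificate, and the skip-verify transport and 2-second interval here are assumptions.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
            Timeout: 5 * time.Second,
        }
        url := "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009"
        for i := 0; i < 10; i++ {
            resp, err := client.Get(url)
            if err != nil {
                // Matches the "connect: connection refused" warnings above.
                fmt.Printf("error getting node (will retry): %v\n", err)
                time.Sleep(2 * time.Second)
                continue
            }
            resp.Body.Close()
            fmt.Println("apiserver reachable, HTTP status:", resp.Status)
            return
        }
    }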
	I1210 07:51:50.091584 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:51:50.153219 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:50.153253 1077343 retry.go:31] will retry after 15.008999648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:50.223406 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:50.279259 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:50.279296 1077343 retry.go:31] will retry after 9.416080018s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:49.547855 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:50.048310 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:50.547470 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:51.048482 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:51.547803 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:52.048220 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:52.428493 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:51:52.490932 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:52.490967 1078428 retry.go:31] will retry after 16.825164958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:52.548145 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:53.047509 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:53.548344 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:54.047578 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:51.553954 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:51:53.554866 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:54.547773 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:55.047551 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:55.547690 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:56.047804 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:56.547512 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:57.048500 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:57.343638 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:57.401827 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:57.401862 1078428 retry.go:31] will retry after 12.086669618s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:57.548118 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:58.047490 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:58.547566 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:59.047512 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:51:56.054059 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:51:58.554867 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:51:59.696250 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:51:59.757338 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:59.757373 1077343 retry.go:31] will retry after 26.778697297s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:51:59.547820 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:00.048277 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:00.547702 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:01.047690 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:01.548160 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:02.047532 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:02.547658 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:03.048174 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:03.547494 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:04.047488 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:52:01.054130 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:03.554867 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:04.236888 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:04.303052 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:04.303083 1077343 retry.go:31] will retry after 25.859676141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:05.163286 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:52:05.227326 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:05.227361 1077343 retry.go:31] will retry after 29.528693098s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:04.547752 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:05.047684 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:05.364684 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:52:05.426426 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:05.426483 1078428 retry.go:31] will retry after 20.310563443s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:05.547649 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:06.047574 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:06.547647 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:07.048386 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:07.548191 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:08.047499 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:08.547510 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:09.047557 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:09.316912 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:09.386785 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:09.386818 1078428 retry.go:31] will retry after 17.689212788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:09.489070 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:52:06.053981 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:08.554858 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:09.547482 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:52:09.552880 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:09.552917 1078428 retry.go:31] will retry after 27.483688335s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:10.047697 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:10.548124 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:11.047626 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:11.548296 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:12.048335 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:12.548247 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:13.047495 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:13.547530 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:14.047549 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:52:11.053980 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:13.054863 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:15.055109 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:14.547736 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:15.047574 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:15.548227 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:16.047516 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:16.548114 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:17.047567 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:17.547679 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:18.048185 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:18.548203 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:19.047660 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:52:17.055513 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:19.553887 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:19.547978 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:20.048384 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:20.548389 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:21.048134 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:21.547434 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:22.048274 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:22.547540 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:22.547641 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:22.572419 1078428 cri.go:89] found id: ""
	I1210 07:52:22.572446 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.572457 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:22.572464 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:22.572530 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:22.596895 1078428 cri.go:89] found id: ""
	I1210 07:52:22.596923 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.596931 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:22.596938 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:22.597000 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:22.621678 1078428 cri.go:89] found id: ""
	I1210 07:52:22.621705 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.621713 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:22.621720 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:22.621783 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:22.646160 1078428 cri.go:89] found id: ""
	I1210 07:52:22.646188 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.646198 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:22.646205 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:22.646270 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:22.671641 1078428 cri.go:89] found id: ""
	I1210 07:52:22.671670 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.671680 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:22.671686 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:22.671750 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:22.697149 1078428 cri.go:89] found id: ""
	I1210 07:52:22.697177 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.697187 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:22.697194 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:22.697255 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:22.722276 1078428 cri.go:89] found id: ""
	I1210 07:52:22.722300 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.722318 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:22.722324 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:22.722388 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:22.751396 1078428 cri.go:89] found id: ""
	I1210 07:52:22.751422 1078428 logs.go:282] 0 containers: []
	W1210 07:52:22.751431 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
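With the apiserver still absent, the runner falls back to enumerating CRI containers component by component; every `found id: ""` above means `crictl ps -a --quiet --name=<component>` returned no IDs at all. A minimal sketch of the same sweep; the component list is copied from the log.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "kubernetes-dashboard",
        }
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("crictl failed for %s: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            // An empty slice here reproduces the `found id: ""` lines above.
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }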
	I1210 07:52:22.751440 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:22.751452 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:22.806571 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:22.806611 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:22.824584 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:22.824623 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:22.902683 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:22.894538    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.894950    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.896552    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.897020    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.898547    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:22.894538    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.894950    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.896552    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.897020    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:22.898547    1846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:22.902704 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:22.902719 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:22.928289 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:22.928326 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
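Since no containers exist, the runner gathers raw diagnostics instead: the kubelet and containerd journals, filtered dmesg, a `kubectl describe nodes` (which fails with the same connection refusal), and a container status listing. A minimal sketch of that pass; the command strings are copied from the log, and the gather helper is an assumption.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one diagnostic command through bash, mirroring the
    // ssh_runner invocations above, and prints whatever comes back.
    func gather(label, script string) {
        fmt.Println("==> " + label)
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("(command failed:", err, ")")
        }
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
        gather("containerd", "sudo journalctl -u containerd -n 400")
        gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }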
	W1210 07:52:21.554922 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:24.054424 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:25.461464 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:25.472201 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:25.472303 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:25.498226 1078428 cri.go:89] found id: ""
	I1210 07:52:25.498253 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.498263 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:25.498269 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:25.498331 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:25.524731 1078428 cri.go:89] found id: ""
	I1210 07:52:25.524759 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.524777 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:25.524789 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:25.524855 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:25.554155 1078428 cri.go:89] found id: ""
	I1210 07:52:25.554178 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.554187 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:25.554194 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:25.554252 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:25.580553 1078428 cri.go:89] found id: ""
	I1210 07:52:25.580584 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.580593 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:25.580599 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:25.580669 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:25.606241 1078428 cri.go:89] found id: ""
	I1210 07:52:25.606309 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.606341 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:25.606369 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:25.606449 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:25.630882 1078428 cri.go:89] found id: ""
	I1210 07:52:25.630912 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.630921 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:25.630928 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:25.631028 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:25.657178 1078428 cri.go:89] found id: ""
	I1210 07:52:25.657207 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.657215 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:25.657221 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:25.657282 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:25.686580 1078428 cri.go:89] found id: ""
	I1210 07:52:25.686604 1078428 logs.go:282] 0 containers: []
	W1210 07:52:25.686612 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:25.686622 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:25.686634 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:25.737209 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:52:25.742985 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:25.743060 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:52:25.816909 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:25.817156 1078428 retry.go:31] will retry after 25.212576039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
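	(Each `apply failed, will retry` / `will retry after 25.212576039s` pair above is one iteration of a retry loop: the apply is re-run after a randomized wait until it succeeds or the addon enable gives up. The fractional durations suggest jitter, but the exact backoff policy is not visible in this log, so the 10-50s jittered wait below is an assumption for illustration:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retryApply re-runs a command after a randomized wait, mirroring
	// the retry.go "will retry after ..." lines. Only the
	// retry-with-logged-wait shape comes from the log; the backoff
	// policy here is a guess.
	func retryApply(attempts int, name string, args ...string) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = exec.Command(name, args...).Run(); err == nil {
				return nil
			}
			wait := time.Duration(10_000+rand.Intn(40_000)) * time.Millisecond
			fmt.Printf("apply failed, will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
		}
		return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
	}

	func main() {
		err := retryApply(3, "kubectl", "apply", "--force",
			"-f", "/etc/kubernetes/addons/storageclass.yaml")
		fmt.Println(err)
	}
	)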
	I1210 07:52:25.818420 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:25.818454 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:25.889855 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:25.881344    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.882021    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.883760    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.884373    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.885999    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:25.881344    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.882021    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.883760    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.884373    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:25.885999    1967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
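	(Every distinct failure in this stretch of the log has the same root cause: nothing is listening on the apiserver port. Client-side `kubectl apply` validation downloads the OpenAPI schema from the server, and `kubectl describe nodes` needs API discovery, so both fail the instant the TCP connect is rejected with `dial tcp [::1]:8443: connect: connection refused`. A quick Go probe that reproduces that diagnosis, with the two addresses taken from the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probe attempts a bare TCP connect, which is exactly the step that
	// fails in every "connection refused" line above.
	func probe(addr string) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%-22s unreachable: %v\n", addr, err)
			return
		}
		conn.Close()
		fmt.Printf("%-22s accepting connections\n", addr)
	}

	func main() {
		probe("localhost:8443")    // endpoint kubectl dials inside the node
		probe("192.168.85.2:8443") // endpoint the no-preload run polls below
	}
	)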
	I1210 07:52:25.889919 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:25.889939 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:25.915022 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:25.915058 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:27.076870 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:27.134892 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:27.134924 1078428 retry.go:31] will retry after 48.20102621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:28.443268 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:28.454097 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:28.454172 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:28.482759 1078428 cri.go:89] found id: ""
	I1210 07:52:28.482789 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.482798 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:28.482805 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:28.482868 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:28.507737 1078428 cri.go:89] found id: ""
	I1210 07:52:28.507760 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.507769 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:28.507775 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:28.507836 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:28.532881 1078428 cri.go:89] found id: ""
	I1210 07:52:28.532907 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.532916 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:28.532923 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:28.532989 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:28.562425 1078428 cri.go:89] found id: ""
	I1210 07:52:28.562451 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.562460 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:28.562489 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:28.562551 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:28.587926 1078428 cri.go:89] found id: ""
	I1210 07:52:28.587952 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.587961 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:28.587967 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:28.588026 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:28.613523 1078428 cri.go:89] found id: ""
	I1210 07:52:28.613593 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.613617 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:28.613638 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:28.613730 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:28.637796 1078428 cri.go:89] found id: ""
	I1210 07:52:28.637864 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.637888 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:28.637907 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:28.637993 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:28.666907 1078428 cri.go:89] found id: ""
	I1210 07:52:28.666937 1078428 logs.go:282] 0 containers: []
	W1210 07:52:28.666946 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:28.666956 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:28.666968 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:28.722569 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:28.722604 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:28.738517 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:28.738592 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:28.814307 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:28.798713    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.799649    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.803563    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.804492    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.807397    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:28.798713    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.799649    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.803563    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.804492    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:28.807397    2087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:28.814366 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:28.814395 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:28.842824 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:28.842905 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
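	(The `container status` command above is a shell fallback chain: use the crictl that `which crictl` finds, otherwise let the bare `crictl` invocation fail and fall through to `docker ps -a`. The same preference order expressed in Go; containerStatus is a hypothetical helper:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus prefers crictl and falls back to docker, like the
	// `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`
	// one-liner in the log.
	func containerStatus() ([]byte, error) {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
			return out, nil
		}
		return exec.Command("sudo", "docker", "ps", "-a").Output()
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("neither crictl nor docker worked:", err)
			return
		}
		fmt.Print(string(out))
	}
	)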
	I1210 07:52:26.536333 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:52:26.554155 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:26.621759 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:26.621788 1077343 retry.go:31] will retry after 32.881374862s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:29.054917 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:30.163626 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:30.226039 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:30.226073 1077343 retry.go:31] will retry after 27.175178767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:31.380548 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:31.391083 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:31.391159 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:31.416470 1078428 cri.go:89] found id: ""
	I1210 07:52:31.416496 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.416504 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:31.416510 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:31.416570 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:31.441740 1078428 cri.go:89] found id: ""
	I1210 07:52:31.441767 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.441776 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:31.441782 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:31.441843 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:31.465834 1078428 cri.go:89] found id: ""
	I1210 07:52:31.465860 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.465869 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:31.465875 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:31.465935 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:31.492061 1078428 cri.go:89] found id: ""
	I1210 07:52:31.492085 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.492093 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:31.492099 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:31.492177 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:31.515891 1078428 cri.go:89] found id: ""
	I1210 07:52:31.515971 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.515993 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:31.516010 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:31.516096 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:31.540039 1078428 cri.go:89] found id: ""
	I1210 07:52:31.540061 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.540069 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:31.540076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:31.540169 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:31.565345 1078428 cri.go:89] found id: ""
	I1210 07:52:31.565372 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.565388 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:31.565395 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:31.565513 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:31.590011 1078428 cri.go:89] found id: ""
	I1210 07:52:31.590035 1078428 logs.go:282] 0 containers: []
	W1210 07:52:31.590044 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:31.590074 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:31.590089 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:31.656796 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:31.649034    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.649484    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.650940    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.651317    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.652719    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:31.649034    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.649484    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.650940    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.651317    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:31.652719    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:31.656816 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:31.656828 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:31.681821 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:31.681855 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:31.709786 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:31.709815 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:31.764688 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:31.764728 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
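	(Each gather cycle above maps one log source to one shell command executed over SSH inside the node: kubelet and containerd come from journalctl, dmesg is filtered to warning level and above, and `describe nodes` goes through the bundled kubectl. A local sketch of the same table-driven loop, with the SSH transport omitted:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Source -> command table, taken from the "Gathering logs for ..."
		// lines above; in minikube these run through ssh_runner, not locally.
		sources := []struct{ name, cmd string }{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"containerd", "sudo journalctl -u containerd -n 400"},
		}
		for _, s := range sources {
			fmt.Printf("Gathering logs for %s ...\n", s.name)
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("  failed: %v\n", err)
				continue
			}
			fmt.Printf("  %d bytes\n", len(out))
		}
	}
	)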
	I1210 07:52:34.283681 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:34.296241 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:34.296314 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:34.337179 1078428 cri.go:89] found id: ""
	I1210 07:52:34.337201 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.337210 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:34.337216 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:34.337274 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:34.369583 1078428 cri.go:89] found id: ""
	I1210 07:52:34.369611 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.369619 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:34.369625 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:34.369683 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:34.395566 1078428 cri.go:89] found id: ""
	I1210 07:52:34.395591 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.395600 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:34.395606 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:34.395688 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:34.419610 1078428 cri.go:89] found id: ""
	I1210 07:52:34.419677 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.419702 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:34.419718 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:34.419797 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:34.444441 1078428 cri.go:89] found id: ""
	I1210 07:52:34.444511 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.444535 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:34.444550 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:34.444627 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:34.469517 1078428 cri.go:89] found id: ""
	I1210 07:52:34.469540 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.469549 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:34.469556 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:34.469618 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:34.494093 1078428 cri.go:89] found id: ""
	I1210 07:52:34.494120 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.494129 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:34.494136 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:34.494196 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	W1210 07:52:31.554771 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:34.054729 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:34.756990 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:52:34.831836 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:34.831956 1077343 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:52:34.518575 1078428 cri.go:89] found id: ""
	I1210 07:52:34.518658 1078428 logs.go:282] 0 containers: []
	W1210 07:52:34.518674 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:34.518685 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:34.518698 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:34.534743 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:34.534770 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:34.597542 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:34.589981    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.590406    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.591861    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.592187    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.593602    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:34.589981    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.590406    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.591861    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.592187    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:34.593602    2318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:34.597564 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:34.597577 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:34.622841 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:34.622876 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:34.653362 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:34.653395 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:37.036872 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:52:37.117418 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:52:37.117451 1078428 retry.go:31] will retry after 42.271832156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
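	(Between apply retries the process also checks whether an apiserver exists at all, via the `sudo pgrep -xnf kube-apiserver.*minikube.*` run on the next line: with -f pgrep matches against the full command line, -x requires the pattern to match it exactly, and -n picks the newest match; it exits 0 with a PID on a hit and 1 otherwise. A sketch of using that exit code as a liveness gate:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// apiserverPID runs the same pgrep probe the log shows and reports
	// whether any kube-apiserver process was found.
	func apiserverPID() (string, bool) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			// pgrep exits 1 when no process matches.
			return "", false
		}
		return strings.TrimSpace(string(out)), true
	}

	func main() {
		if pid, ok := apiserverPID(); ok {
			fmt.Println("kube-apiserver running as pid", pid)
		} else {
			fmt.Println("no kube-apiserver process found")
		}
	}
	)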
	I1210 07:52:37.209642 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:37.220263 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:37.220360 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:37.244517 1078428 cri.go:89] found id: ""
	I1210 07:52:37.244544 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.244552 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:37.244558 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:37.244619 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:37.269073 1078428 cri.go:89] found id: ""
	I1210 07:52:37.269099 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.269108 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:37.269114 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:37.269175 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:37.292561 1078428 cri.go:89] found id: ""
	I1210 07:52:37.292587 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.292596 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:37.292604 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:37.292661 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:37.330286 1078428 cri.go:89] found id: ""
	I1210 07:52:37.330312 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.330321 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:37.330328 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:37.330388 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:37.362527 1078428 cri.go:89] found id: ""
	I1210 07:52:37.362555 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.362564 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:37.362570 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:37.362633 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:37.387887 1078428 cri.go:89] found id: ""
	I1210 07:52:37.387912 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.387921 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:37.387927 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:37.387988 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:37.412303 1078428 cri.go:89] found id: ""
	I1210 07:52:37.412329 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.412337 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:37.412344 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:37.412451 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:37.436571 1078428 cri.go:89] found id: ""
	I1210 07:52:37.436596 1078428 logs.go:282] 0 containers: []
	W1210 07:52:37.436605 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:37.436614 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:37.436626 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:37.462030 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:37.462074 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:37.489847 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:37.489875 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:37.545757 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:37.545792 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:37.561730 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:37.561763 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:37.627065 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:37.618607    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.619149    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.620826    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.621602    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.623135    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:37.618607    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.619149    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.620826    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.621602    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:37.623135    2450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1210 07:52:36.554875 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:39.054027 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
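	(Interleaved with all of the above, the no-preload run (pid 1077343) polls the node object directly and hits the same refused port on the node's own address, 192.168.85.2:8443. A sketch of that poll; the real code uses an authenticated client-go client and inspects the node's Ready condition, whereas this simplification does an unauthenticated GET with TLS verification disabled, for illustration only:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver cert is not trusted here; skipping
			// verification is acceptable only in a throwaway probe.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009"
		for attempt := 1; attempt <= 3; attempt++ {
			resp, err := client.Get(url)
			if err != nil {
				// Matches the node_ready.go warnings: connection refused, retry.
				fmt.Printf("error getting node (will retry): %v\n", err)
				time.Sleep(2 * time.Second)
				continue
			}
			resp.Body.Close()
			fmt.Println("apiserver answered:", resp.Status)
			return
		}
		fmt.Println("node never became reachable")
	}
	)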
	I1210 07:52:40.127737 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:40.139792 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:40.139876 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:40.166917 1078428 cri.go:89] found id: ""
	I1210 07:52:40.166944 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.166952 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:40.166964 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:40.167028 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:40.193972 1078428 cri.go:89] found id: ""
	I1210 07:52:40.194000 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.194009 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:40.194015 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:40.194111 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:40.226660 1078428 cri.go:89] found id: ""
	I1210 07:52:40.226693 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.226702 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:40.226709 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:40.226774 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:40.257013 1078428 cri.go:89] found id: ""
	I1210 07:52:40.257056 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.257067 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:40.257074 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:40.257140 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:40.282449 1078428 cri.go:89] found id: ""
	I1210 07:52:40.282500 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.282509 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:40.282516 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:40.282580 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:40.332986 1078428 cri.go:89] found id: ""
	I1210 07:52:40.333018 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.333027 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:40.333050 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:40.333188 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:40.366223 1078428 cri.go:89] found id: ""
	I1210 07:52:40.366258 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.366268 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:40.366275 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:40.366347 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:40.393136 1078428 cri.go:89] found id: ""
	I1210 07:52:40.393163 1078428 logs.go:282] 0 containers: []
	W1210 07:52:40.393171 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:40.393181 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:40.393193 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:40.422285 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:40.422314 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:40.481326 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:40.481365 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:40.497675 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:40.497725 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:40.562074 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:40.554513    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.554932    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.556446    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.556761    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:40.558191    2558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:40.562093 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:40.562106 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
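Each retry cycle above has the same shape: probe for an apiserver process, list CRI containers for every control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), then gather kubelet, dmesg, containerd, "describe nodes" and container-status output (the gathering order varies between cycles). The gathering half can be reproduced by hand with the commands exactly as minikube runs them here (binary and kubeconfig paths are the ones shown in this log, specific to the v1.35.0-beta.0 run):

  sudo journalctl -u kubelet -n 400
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
  sudo journalctl -u containerd -n 400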
	I1210 07:52:43.088690 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:43.099750 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:43.099828 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:43.124516 1078428 cri.go:89] found id: ""
	I1210 07:52:43.124552 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.124561 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:43.124567 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:43.124628 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:43.153325 1078428 cri.go:89] found id: ""
	I1210 07:52:43.153347 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.153356 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:43.153362 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:43.153423 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:43.178405 1078428 cri.go:89] found id: ""
	I1210 07:52:43.178429 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.178437 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:43.178443 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:43.178609 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:43.201768 1078428 cri.go:89] found id: ""
	I1210 07:52:43.201791 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.201800 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:43.201806 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:43.201865 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:43.225907 1078428 cri.go:89] found id: ""
	I1210 07:52:43.225931 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.225940 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:43.225946 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:43.226004 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:43.250803 1078428 cri.go:89] found id: ""
	I1210 07:52:43.250828 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.250837 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:43.250843 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:43.250916 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:43.275081 1078428 cri.go:89] found id: ""
	I1210 07:52:43.275147 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.275161 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:43.275168 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:43.275245 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:43.306794 1078428 cri.go:89] found id: ""
	I1210 07:52:43.306827 1078428 logs.go:282] 0 containers: []
	W1210 07:52:43.306836 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:43.306845 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:43.306857 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:43.337826 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:43.337854 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:43.396050 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:43.396089 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:43.413002 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:43.413031 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:43.479541 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:43.471065    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.471844    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.473576    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.474063    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:43.475610    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:43.479565 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:43.479578 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1210 07:52:41.054361 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:43.054892 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:46.005454 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:46.017579 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:46.017658 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:46.053539 1078428 cri.go:89] found id: ""
	I1210 07:52:46.053570 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.053579 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:46.053585 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:46.053649 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:46.088548 1078428 cri.go:89] found id: ""
	I1210 07:52:46.088572 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.088581 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:46.088596 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:46.088660 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:46.126497 1078428 cri.go:89] found id: ""
	I1210 07:52:46.126571 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.126594 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:46.126613 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:46.126734 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:46.150556 1078428 cri.go:89] found id: ""
	I1210 07:52:46.150626 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.150643 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:46.150651 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:46.150719 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:46.174996 1078428 cri.go:89] found id: ""
	I1210 07:52:46.175019 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.175027 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:46.175033 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:46.175107 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:46.199701 1078428 cri.go:89] found id: ""
	I1210 07:52:46.199726 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.199735 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:46.199742 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:46.199845 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:46.224632 1078428 cri.go:89] found id: ""
	I1210 07:52:46.224657 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.224666 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:46.224672 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:46.224752 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:46.248234 1078428 cri.go:89] found id: ""
	I1210 07:52:46.248259 1078428 logs.go:282] 0 containers: []
	W1210 07:52:46.248267 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:46.248277 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:46.248334 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:46.264183 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:46.264221 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:46.342979 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:46.323053    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.323907    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.328271    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.328706    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:46.338602    2764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:46.343063 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:46.343092 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:46.369476 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:46.369511 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:46.397302 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:46.397339 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
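The "container status" gather is a small shell fallback chain rather than a single command: it prefers crictl when installed and falls back to docker if the crictl listing fails. Expanded with $() for readability (logic verbatim from the log line above, which uses backticks):

  # resolve crictl's path, or use the bare name if `which` finds nothing
  sudo $(which crictl || echo crictl) ps -a \
    || sudo docker ps -a   # last resort if the crictl listing fails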
	I1210 07:52:48.952567 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:48.962857 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:48.962931 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:48.992562 1078428 cri.go:89] found id: ""
	I1210 07:52:48.992589 1078428 logs.go:282] 0 containers: []
	W1210 07:52:48.992599 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:48.992606 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:48.992671 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:49.018277 1078428 cri.go:89] found id: ""
	I1210 07:52:49.018303 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.018312 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:49.018318 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:49.018387 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:49.045715 1078428 cri.go:89] found id: ""
	I1210 07:52:49.045743 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.045752 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:49.045758 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:49.045826 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:49.083318 1078428 cri.go:89] found id: ""
	I1210 07:52:49.083348 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.083358 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:49.083364 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:49.083422 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:49.109936 1078428 cri.go:89] found id: ""
	I1210 07:52:49.109958 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.109966 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:49.109989 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:49.110049 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:49.134580 1078428 cri.go:89] found id: ""
	I1210 07:52:49.134607 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.134617 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:49.134623 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:49.134681 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:49.159828 1078428 cri.go:89] found id: ""
	I1210 07:52:49.159906 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.159924 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:49.159931 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:49.160011 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:49.184837 1078428 cri.go:89] found id: ""
	I1210 07:52:49.184862 1078428 logs.go:282] 0 containers: []
	W1210 07:52:49.184872 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:49.184881 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:49.184902 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:49.210656 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:49.210691 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:49.241224 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:49.241256 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:49.303253 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:49.303297 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:49.319808 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:49.319838 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:49.389423 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:49.381044    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.381468    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.383366    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.383682    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:49.385122    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 07:52:45.554347 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:47.554702 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:50.054996 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
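The W lines tagged with PID 1077343 are interleaved from a second, concurrently running profile ("no-preload-587009") that is polling its own node's Ready condition against 192.168.85.2:8443 and hitting the same connection-refused symptom. An equivalent manual poll would look something like the following (node name and endpoint are taken from the log; the --context and jsonpath form are assumptions for illustration):

  kubectl --context no-preload-587009 get node no-preload-587009 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'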
	I1210 07:52:51.030067 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:52:51.093289 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:51.093415 1078428 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
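The dashboard addon failure is the same root cause surfacing through kubectl's client-side validation, which must download the OpenAPI schema from the unreachable apiserver before it can validate any manifest. The stderr's suggestion of --validate=false would only move the failure to the apply call itself while the apiserver is down. The retried command, shortened to one manifest for readability (full file list as in the log above):

  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
    /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
    -f /etc/kubernetes/addons/dashboard-ns.yaml   # ...plus the remaining dashboard-*.yaml manifests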
	I1210 07:52:51.889686 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:51.900249 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:51.900353 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:51.925533 1078428 cri.go:89] found id: ""
	I1210 07:52:51.925559 1078428 logs.go:282] 0 containers: []
	W1210 07:52:51.925567 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:51.925621 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:51.925706 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:51.950161 1078428 cri.go:89] found id: ""
	I1210 07:52:51.950186 1078428 logs.go:282] 0 containers: []
	W1210 07:52:51.950194 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:51.950201 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:51.950280 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:51.976938 1078428 cri.go:89] found id: ""
	I1210 07:52:51.976964 1078428 logs.go:282] 0 containers: []
	W1210 07:52:51.976972 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:51.976979 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:51.977038 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:52.006745 1078428 cri.go:89] found id: ""
	I1210 07:52:52.006841 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.006865 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:52.006887 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:52.007015 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:52.033557 1078428 cri.go:89] found id: ""
	I1210 07:52:52.033585 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.033595 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:52.033601 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:52.033672 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:52.066821 1078428 cri.go:89] found id: ""
	I1210 07:52:52.066850 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.066860 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:52.066867 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:52.066929 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:52.101024 1078428 cri.go:89] found id: ""
	I1210 07:52:52.101051 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.101060 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:52.101067 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:52.101128 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:52.130045 1078428 cri.go:89] found id: ""
	I1210 07:52:52.130070 1078428 logs.go:282] 0 containers: []
	W1210 07:52:52.130079 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:52.130088 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:52.130100 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:52.184627 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:52.184662 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:52.200733 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:52.200759 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:52.265577 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:52.257022    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.257577    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.259300    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.259861    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:52.261490    2996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:52.265610 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:52.265626 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:52.291354 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:52.291390 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:52:52.555048 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:55.054639 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:54.834203 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:54.845400 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:54.845510 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:54.871357 1078428 cri.go:89] found id: ""
	I1210 07:52:54.871383 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.871392 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:54.871399 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:54.871463 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:54.897322 1078428 cri.go:89] found id: ""
	I1210 07:52:54.897352 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.897360 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:54.897366 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:54.897425 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:54.922291 1078428 cri.go:89] found id: ""
	I1210 07:52:54.922320 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.922329 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:54.922334 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:54.922405 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:54.947056 1078428 cri.go:89] found id: ""
	I1210 07:52:54.947080 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.947089 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:54.947095 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:54.947155 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:54.972572 1078428 cri.go:89] found id: ""
	I1210 07:52:54.972599 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.972608 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:54.972614 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:54.972675 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:54.997657 1078428 cri.go:89] found id: ""
	I1210 07:52:54.997685 1078428 logs.go:282] 0 containers: []
	W1210 07:52:54.997694 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:54.997700 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:54.997777 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:55.025796 1078428 cri.go:89] found id: ""
	I1210 07:52:55.025819 1078428 logs.go:282] 0 containers: []
	W1210 07:52:55.025829 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:55.025835 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:55.026185 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:55.069593 1078428 cri.go:89] found id: ""
	I1210 07:52:55.069631 1078428 logs.go:282] 0 containers: []
	W1210 07:52:55.069640 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:55.069649 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:55.069662 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:55.135748 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:55.135788 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:55.151784 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:55.151815 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:55.220457 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:55.212663    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.213305    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.215040    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.215532    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:55.216531    3111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:52:55.220480 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:55.220495 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:55.245834 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:55.245869 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:57.774707 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:57.785110 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:57.785178 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:57.810275 1078428 cri.go:89] found id: ""
	I1210 07:52:57.810302 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.810320 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:57.810328 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:52:57.810389 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:57.838839 1078428 cri.go:89] found id: ""
	I1210 07:52:57.838862 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.838871 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:52:57.838877 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:52:57.838937 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:57.863185 1078428 cri.go:89] found id: ""
	I1210 07:52:57.863212 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.863221 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:52:57.863227 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:57.863287 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:57.890204 1078428 cri.go:89] found id: ""
	I1210 07:52:57.890234 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.890244 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:57.890250 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:57.890314 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:57.916593 1078428 cri.go:89] found id: ""
	I1210 07:52:57.916616 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.916624 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:57.916630 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:57.916690 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:57.940351 1078428 cri.go:89] found id: ""
	I1210 07:52:57.940373 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.940381 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:57.940387 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:57.940448 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:57.965417 1078428 cri.go:89] found id: ""
	I1210 07:52:57.965453 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.965462 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:57.965469 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:52:57.965535 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:52:57.989157 1078428 cri.go:89] found id: ""
	I1210 07:52:57.989183 1078428 logs.go:282] 0 containers: []
	W1210 07:52:57.989192 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:52:57.989202 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:52:57.989213 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:52:58.015326 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:52:58.015366 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:58.055222 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:58.055248 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:58.115866 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:58.115945 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:58.131823 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:58.131852 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:58.196880 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:58.188547    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.189067    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.190600    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.191285    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.192890    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:58.188547    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.189067    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.190600    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.191285    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:52:58.192890    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:57.402101 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:52:57.460754 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:57.460865 1077343 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1210 07:52:57.554262 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:52:59.503589 1077343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:52:59.554549 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:52:59.576553 1077343 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:52:59.576655 1077343 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:52:59.579701 1077343 out.go:179] * Enabled addons: 
	I1210 07:52:59.582536 1077343 addons.go:530] duration metric: took 1m41.60352286s for enable addons: enabled=[]
	I1210 07:53:00.697148 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:00.707593 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:00.707661 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:00.735938 1078428 cri.go:89] found id: ""
	I1210 07:53:00.735962 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.735971 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:00.735977 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:00.736039 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:00.759785 1078428 cri.go:89] found id: ""
	I1210 07:53:00.759808 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.759817 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:00.759823 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:00.759887 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:00.784529 1078428 cri.go:89] found id: ""
	I1210 07:53:00.784552 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.784561 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:00.784567 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:00.784641 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:00.813420 1078428 cri.go:89] found id: ""
	I1210 07:53:00.813443 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.813452 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:00.813459 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:00.813518 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:00.838413 1078428 cri.go:89] found id: ""
	I1210 07:53:00.838439 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.838449 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:00.838455 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:00.838559 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:00.862923 1078428 cri.go:89] found id: ""
	I1210 07:53:00.862949 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.862968 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:00.862975 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:00.863034 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:00.890339 1078428 cri.go:89] found id: ""
	I1210 07:53:00.890366 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.890375 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:00.890381 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:00.890440 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:00.916963 1078428 cri.go:89] found id: ""
	I1210 07:53:00.916992 1078428 logs.go:282] 0 containers: []
	W1210 07:53:00.917001 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:00.917010 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:00.917022 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:00.972565 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:00.972601 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:00.990064 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:00.990154 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:01.068497 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:01.060518    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.061332    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.062990    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.063328    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.064654    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:01.060518    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.061332    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.062990    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.063328    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:01.064654    3335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:01.068521 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:01.068534 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:01.097602 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:01.097641 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:03.628666 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:03.639440 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:03.639518 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:03.664498 1078428 cri.go:89] found id: ""
	I1210 07:53:03.664523 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.664531 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:03.664538 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:03.664601 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:03.688357 1078428 cri.go:89] found id: ""
	I1210 07:53:03.688382 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.688391 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:03.688397 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:03.688460 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:03.712874 1078428 cri.go:89] found id: ""
	I1210 07:53:03.712898 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.712906 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:03.712913 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:03.712990 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:03.737610 1078428 cri.go:89] found id: ""
	I1210 07:53:03.737635 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.737643 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:03.737650 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:03.737712 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:03.762668 1078428 cri.go:89] found id: ""
	I1210 07:53:03.762695 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.762703 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:03.762710 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:03.762769 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:03.795710 1078428 cri.go:89] found id: ""
	I1210 07:53:03.795732 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.795741 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:03.795747 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:03.795809 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:03.819247 1078428 cri.go:89] found id: ""
	I1210 07:53:03.819275 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.819285 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:03.819291 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:03.819355 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:03.842854 1078428 cri.go:89] found id: ""
	I1210 07:53:03.842881 1078428 logs.go:282] 0 containers: []
	W1210 07:53:03.842891 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:03.842900 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:03.842911 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:03.858681 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:03.858748 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:03.922352 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:03.913909    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.914521    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.916231    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.916791    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.918447    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:03.913909    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.914521    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.916231    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.916791    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:03.918447    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:03.922383 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:03.922401 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:03.948481 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:03.948520 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:03.977218 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:03.977247 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:53:02.054010 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:04.555038 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:06.532410 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:06.544357 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:06.544451 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:06.576472 1078428 cri.go:89] found id: ""
	I1210 07:53:06.576500 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.576511 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:06.576517 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:06.576581 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:06.609024 1078428 cri.go:89] found id: ""
	I1210 07:53:06.609051 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.609061 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:06.609067 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:06.609134 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:06.636182 1078428 cri.go:89] found id: ""
	I1210 07:53:06.636209 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.636218 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:06.636224 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:06.636286 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:06.664610 1078428 cri.go:89] found id: ""
	I1210 07:53:06.664677 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.664699 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:06.664720 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:06.664812 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:06.690522 1078428 cri.go:89] found id: ""
	I1210 07:53:06.690548 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.690557 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:06.690564 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:06.690626 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:06.716006 1078428 cri.go:89] found id: ""
	I1210 07:53:06.716035 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.716044 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:06.716050 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:06.716115 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:06.740705 1078428 cri.go:89] found id: ""
	I1210 07:53:06.740726 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.740734 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:06.740741 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:06.740803 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:06.764831 1078428 cri.go:89] found id: ""
	I1210 07:53:06.764852 1078428 logs.go:282] 0 containers: []
	W1210 07:53:06.764860 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:06.764869 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:06.764881 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:06.820337 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:06.820372 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:06.836899 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:06.836931 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:06.902143 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:06.893655    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.894319    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.895838    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.896417    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.898017    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:06.893655    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.894319    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.895838    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.896417    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:06.898017    3559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:06.902164 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:06.902178 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:06.927253 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:06.927289 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:09.458854 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:09.469382 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:09.469466 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:09.494769 1078428 cri.go:89] found id: ""
	I1210 07:53:09.494791 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.494799 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:09.494805 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:09.494866 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1210 07:53:07.053986 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:09.554520 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:09.520347 1078428 cri.go:89] found id: ""
	I1210 07:53:09.520374 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.520383 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:09.520390 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:09.520454 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:09.549983 1078428 cri.go:89] found id: ""
	I1210 07:53:09.550010 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.550019 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:09.550025 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:09.550085 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:09.588794 1078428 cri.go:89] found id: ""
	I1210 07:53:09.588821 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.588830 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:09.588836 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:09.588895 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:09.617370 1078428 cri.go:89] found id: ""
	I1210 07:53:09.617393 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.617401 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:09.617407 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:09.617465 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:09.645730 1078428 cri.go:89] found id: ""
	I1210 07:53:09.645755 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.645779 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:09.645786 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:09.645850 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:09.672062 1078428 cri.go:89] found id: ""
	I1210 07:53:09.672088 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.672097 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:09.672103 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:09.672174 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:09.695770 1078428 cri.go:89] found id: ""
	I1210 07:53:09.695793 1078428 logs.go:282] 0 containers: []
	W1210 07:53:09.695802 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:09.695811 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:09.695822 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:09.721144 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:09.721180 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:09.748337 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:09.748367 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:09.802348 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:09.802384 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:09.818196 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:09.818226 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:09.884770 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:09.876535    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.877364    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.879072    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.879398    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.880893    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:09.876535    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.877364    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.879072    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.879398    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:09.880893    3688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:12.385627 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:12.396288 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:12.396367 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:12.421158 1078428 cri.go:89] found id: ""
	I1210 07:53:12.421194 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.421204 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:12.421210 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:12.421281 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:12.446171 1078428 cri.go:89] found id: ""
	I1210 07:53:12.446206 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.446216 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:12.446222 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:12.446294 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:12.470791 1078428 cri.go:89] found id: ""
	I1210 07:53:12.470818 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.470828 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:12.470836 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:12.470895 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:12.499441 1078428 cri.go:89] found id: ""
	I1210 07:53:12.499467 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.499476 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:12.499483 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:12.499561 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:12.524188 1078428 cri.go:89] found id: ""
	I1210 07:53:12.524211 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.524219 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:12.524225 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:12.524285 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:12.550501 1078428 cri.go:89] found id: ""
	I1210 07:53:12.550528 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.550537 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:12.550543 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:12.550617 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:12.578576 1078428 cri.go:89] found id: ""
	I1210 07:53:12.578602 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.578611 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:12.578616 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:12.578687 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:12.612078 1078428 cri.go:89] found id: ""
	I1210 07:53:12.612113 1078428 logs.go:282] 0 containers: []
	W1210 07:53:12.612122 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:12.612132 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:12.612144 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:12.645096 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:12.645125 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:12.700179 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:12.700217 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:12.715578 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:12.715606 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:12.781369 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:12.772466    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.773589    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.774343    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.775989    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.776533    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:12.772466    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.773589    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.774343    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.775989    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:12.776533    3798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:12.781391 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:12.781403 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1210 07:53:11.554633 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:14.054508 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:15.306176 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:15.317232 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:15.317315 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:15.336640 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:53:15.353595 1078428 cri.go:89] found id: ""
	I1210 07:53:15.353626 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.353635 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:15.353642 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:15.353703 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1210 07:53:15.421893 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:53:15.421994 1078428 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:53:15.422157 1078428 cri.go:89] found id: ""
	I1210 07:53:15.422177 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.422185 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:15.422192 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:15.422270 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:15.447660 1078428 cri.go:89] found id: ""
	I1210 07:53:15.447684 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.447693 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:15.447699 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:15.447763 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:15.471893 1078428 cri.go:89] found id: ""
	I1210 07:53:15.471918 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.471927 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:15.471934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:15.472003 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:15.496880 1078428 cri.go:89] found id: ""
	I1210 07:53:15.496915 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.496924 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:15.496930 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:15.496999 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:15.525007 1078428 cri.go:89] found id: ""
	I1210 07:53:15.525043 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.525055 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:15.525061 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:15.525138 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:15.556732 1078428 cri.go:89] found id: ""
	I1210 07:53:15.556776 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.556785 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:15.556792 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:15.556864 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:15.592802 1078428 cri.go:89] found id: ""
	I1210 07:53:15.592835 1078428 logs.go:282] 0 containers: []
	W1210 07:53:15.592844 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:15.592854 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:15.592866 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:15.660809 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:15.660846 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:15.677009 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:15.677040 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:15.743204 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:15.734495    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.735207    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.736858    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.737446    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.739134    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:15.734495    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.735207    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.736858    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.737446    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:15.739134    3906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:15.743227 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:15.743239 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:15.768020 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:15.768053 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:18.297028 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:18.310128 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:18.310198 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:18.340476 1078428 cri.go:89] found id: ""
	I1210 07:53:18.340572 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.340599 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:18.340642 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:18.340769 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:18.369516 1078428 cri.go:89] found id: ""
	I1210 07:53:18.369582 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.369614 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:18.369633 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:18.369753 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:18.396295 1078428 cri.go:89] found id: ""
	I1210 07:53:18.396321 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.396330 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:18.396336 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:18.396428 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:18.422012 1078428 cri.go:89] found id: ""
	I1210 07:53:18.422037 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.422046 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:18.422052 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:18.422164 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:18.446495 1078428 cri.go:89] found id: ""
	I1210 07:53:18.446518 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.446526 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:18.446532 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:18.446600 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:18.471650 1078428 cri.go:89] found id: ""
	I1210 07:53:18.471674 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.471682 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:18.471688 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:18.471779 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:18.495591 1078428 cri.go:89] found id: ""
	I1210 07:53:18.495616 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.495624 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:18.495631 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:18.495694 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:18.523464 1078428 cri.go:89] found id: ""
	I1210 07:53:18.523489 1078428 logs.go:282] 0 containers: []
	W1210 07:53:18.523497 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:18.523506 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:18.523518 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:18.585434 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:18.585481 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:18.610315 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:18.610344 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:18.674572 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:18.666289    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.667061    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.668721    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.669016    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:18.670764    4015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:18.674593 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:18.674607 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:18.699401 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:18.699435 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:19.389521 1078428 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:53:19.452005 1078428 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:53:19.452105 1078428 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:53:19.455408 1078428 out.go:179] * Enabled addons: 
	I1210 07:53:19.458237 1078428 addons.go:530] duration metric: took 1m57.316864384s for enable addons: enabled=[]
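Note on the addon failure above: "kubectl apply" validates manifests against the server's OpenAPI schema, so the storage-provisioner apply hits the same connection-refused error and minikube gives up with an empty enabled=[] list. The error text itself names the escape hatch; a hand-run equivalent (a sketch; run inside the node, e.g. via minikube ssh) would be:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	  -f /etc/kubernetes/addons/storage-provisioner.yaml --validate=false

Skipping validation only sidesteps the OpenAPI download, though; the apply itself still needs a reachable apiserver on localhost:8443, so this would fail here too.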
	W1210 07:53:16.054718 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:18.554815 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:21.227168 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:21.237506 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:21.237577 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:21.261812 1078428 cri.go:89] found id: ""
	I1210 07:53:21.261842 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.261852 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:21.261858 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:21.261921 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:21.289741 1078428 cri.go:89] found id: ""
	I1210 07:53:21.289767 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.289787 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:21.289794 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:21.289855 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:21.331373 1078428 cri.go:89] found id: ""
	I1210 07:53:21.331400 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.331410 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:21.331415 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:21.331534 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:21.364401 1078428 cri.go:89] found id: ""
	I1210 07:53:21.364427 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.364436 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:21.364443 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:21.364504 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:21.395936 1078428 cri.go:89] found id: ""
	I1210 07:53:21.395965 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.395975 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:21.395981 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:21.396044 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:21.420965 1078428 cri.go:89] found id: ""
	I1210 07:53:21.420996 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.421005 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:21.421012 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:21.421073 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:21.446318 1078428 cri.go:89] found id: ""
	I1210 07:53:21.446345 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.446354 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:21.446360 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:21.446422 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:21.475470 1078428 cri.go:89] found id: ""
	I1210 07:53:21.475499 1078428 logs.go:282] 0 containers: []
	W1210 07:53:21.475509 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:21.475521 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:21.475537 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:21.530313 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:21.530354 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:21.548651 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:21.548737 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:21.632055 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:21.623055    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.623614    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.625291    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.625976    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:21.627769    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:21.632137 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:21.632157 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:21.659428 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:21.659466 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:24.192421 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:24.203056 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:24.203137 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:24.232457 1078428 cri.go:89] found id: ""
	I1210 07:53:24.232493 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.232502 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:24.232509 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:24.232576 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:24.260730 1078428 cri.go:89] found id: ""
	I1210 07:53:24.260758 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.260768 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:24.260774 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:24.260837 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:24.284981 1078428 cri.go:89] found id: ""
	I1210 07:53:24.285009 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.285018 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:24.285024 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:24.285086 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:24.316578 1078428 cri.go:89] found id: ""
	I1210 07:53:24.316604 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.316613 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:24.316619 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:24.316678 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:24.353587 1078428 cri.go:89] found id: ""
	I1210 07:53:24.353622 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.353638 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:24.353645 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:24.353740 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:24.384460 1078428 cri.go:89] found id: ""
	I1210 07:53:24.384483 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.384492 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:24.384498 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:24.384562 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:24.414252 1078428 cri.go:89] found id: ""
	I1210 07:53:24.414280 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.414290 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:24.414296 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:24.414361 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:24.442225 1078428 cri.go:89] found id: ""
	I1210 07:53:24.442247 1078428 logs.go:282] 0 containers: []
	W1210 07:53:24.442256 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:24.442265 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:24.442276 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:24.467596 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:24.467629 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:53:21.054852 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:23.554809 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:24.499949 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:24.499977 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:24.558185 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:24.558223 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:24.576232 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:24.576264 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:24.646699 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:24.638205    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.639089    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.639811    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.641363    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:24.641799    4260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
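Note: the per-component crictl sweep repeated in each cycle can be reproduced as one loop. This is the same probe minikube runs; the crictl invocation is copied from the log and only the loop wrapper is added:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"   # empty output = no such container
	done

An empty result for every name, as here, means containerd is holding no control-plane containers at all, which is consistent with the connection-refused errors from kubectl.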
	I1210 07:53:27.148382 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:27.158984 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:27.159102 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:27.183857 1078428 cri.go:89] found id: ""
	I1210 07:53:27.183927 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.183943 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:27.183951 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:27.184028 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:27.207461 1078428 cri.go:89] found id: ""
	I1210 07:53:27.207529 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.207554 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:27.207568 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:27.207645 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:27.234849 1078428 cri.go:89] found id: ""
	I1210 07:53:27.234876 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.234884 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:27.234890 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:27.234948 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:27.258887 1078428 cri.go:89] found id: ""
	I1210 07:53:27.258910 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.258919 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:27.258926 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:27.258983 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:27.283113 1078428 cri.go:89] found id: ""
	I1210 07:53:27.283189 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.283206 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:27.283214 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:27.283283 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:27.324968 1078428 cri.go:89] found id: ""
	I1210 07:53:27.324994 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.325004 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:27.325010 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:27.325070 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:27.355711 1078428 cri.go:89] found id: ""
	I1210 07:53:27.355739 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.355749 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:27.355755 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:27.355817 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:27.383387 1078428 cri.go:89] found id: ""
	I1210 07:53:27.383424 1078428 logs.go:282] 0 containers: []
	W1210 07:53:27.383435 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:27.383445 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:27.383456 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:27.408324 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:27.408363 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:27.438348 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:27.438424 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:27.496282 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:27.496317 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:27.512354 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:27.512385 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:27.586988 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:27.577963    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.578714    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.580435    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.580907    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:27.582816    4367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 07:53:26.054246 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:28.554092 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:30.088030 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:30.100373 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:30.100449 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:30.127922 1078428 cri.go:89] found id: ""
	I1210 07:53:30.127998 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.128023 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:30.128041 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:30.128120 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:30.160672 1078428 cri.go:89] found id: ""
	I1210 07:53:30.160699 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.160709 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:30.160722 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:30.160784 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:30.186050 1078428 cri.go:89] found id: ""
	I1210 07:53:30.186077 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.186086 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:30.186093 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:30.186157 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:30.211107 1078428 cri.go:89] found id: ""
	I1210 07:53:30.211132 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.211141 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:30.211147 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:30.211213 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:30.235571 1078428 cri.go:89] found id: ""
	I1210 07:53:30.235598 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.235608 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:30.235615 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:30.235678 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:30.264308 1078428 cri.go:89] found id: ""
	I1210 07:53:30.264331 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.264339 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:30.264346 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:30.264413 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:30.288489 1078428 cri.go:89] found id: ""
	I1210 07:53:30.288557 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.288581 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:30.288594 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:30.288673 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:30.318600 1078428 cri.go:89] found id: ""
	I1210 07:53:30.318628 1078428 logs.go:282] 0 containers: []
	W1210 07:53:30.318638 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:30.318648 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:30.318679 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:30.359074 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:30.359103 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:30.417146 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:30.417182 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:30.432931 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:30.432960 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:30.497452 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:30.488702    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.489502    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.491238    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.491784    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:30.493510    4479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:30.497474 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:30.497487 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:33.027579 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:33.038128 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:33.038197 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:33.063535 1078428 cri.go:89] found id: ""
	I1210 07:53:33.063560 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.063572 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:33.063578 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:33.063642 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:33.087384 1078428 cri.go:89] found id: ""
	I1210 07:53:33.087406 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.087414 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:33.087420 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:33.087478 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:33.112186 1078428 cri.go:89] found id: ""
	I1210 07:53:33.112247 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.112258 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:33.112265 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:33.112326 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:33.136102 1078428 cri.go:89] found id: ""
	I1210 07:53:33.136125 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.136133 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:33.136139 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:33.136202 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:33.160865 1078428 cri.go:89] found id: ""
	I1210 07:53:33.160931 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.160957 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:33.160986 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:33.161071 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:33.185964 1078428 cri.go:89] found id: ""
	I1210 07:53:33.186031 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.186054 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:33.186075 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:33.186150 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:33.211060 1078428 cri.go:89] found id: ""
	I1210 07:53:33.211086 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.211095 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:33.211100 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:33.211180 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:33.236111 1078428 cri.go:89] found id: ""
	I1210 07:53:33.236180 1078428 logs.go:282] 0 containers: []
	W1210 07:53:33.236213 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:33.236227 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:33.236251 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:33.252003 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:33.252029 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:33.315902 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:33.308251    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.308659    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.310144    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.310442    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:33.311844    4578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:53:33.315967 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:33.316003 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:33.342524 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:33.342604 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:33.377391 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:33.377419 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:53:30.554186 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:33.054061 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:35.054801 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
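Note: in parallel, process 1077343 is polling node "no-preload-587009" for its Ready condition and hitting the same refused port on 192.168.85.2:8443. A quick reachability spot-check from the host (an illustrative assumption, not a command from this log; -k only skips TLS verification, and an up-but-secured apiserver would typically answer 401 or 403 rather than refuse the connection):

	curl -sk https://192.168.85.2:8443/api/v1/nodes/no-preload-587009 ; echo "exit=$?"

While the apiserver is down this prints nothing and exits with curl's connection-refused status (7) instead of returning a response.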
	I1210 07:53:35.933860 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:35.945070 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:35.945142 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:35.971394 1078428 cri.go:89] found id: ""
	I1210 07:53:35.971423 1078428 logs.go:282] 0 containers: []
	W1210 07:53:35.971432 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:35.971438 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:35.971501 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:36.005170 1078428 cri.go:89] found id: ""
	I1210 07:53:36.005227 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.005240 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:36.005248 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:36.005329 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:36.035275 1078428 cri.go:89] found id: ""
	I1210 07:53:36.035299 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.035307 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:36.035313 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:36.035380 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:36.060232 1078428 cri.go:89] found id: ""
	I1210 07:53:36.060255 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.060266 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:36.060272 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:36.060336 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:36.084825 1078428 cri.go:89] found id: ""
	I1210 07:53:36.084850 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.084859 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:36.084866 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:36.084955 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:36.110606 1078428 cri.go:89] found id: ""
	I1210 07:53:36.110630 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.110639 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:36.110664 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:36.110728 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:36.139205 1078428 cri.go:89] found id: ""
	I1210 07:53:36.139232 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.139241 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:36.139248 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:36.139358 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:36.165255 1078428 cri.go:89] found id: ""
	I1210 07:53:36.165279 1078428 logs.go:282] 0 containers: []
	W1210 07:53:36.165287 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:36.165296 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:36.165308 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:36.190967 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:36.191003 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:36.228036 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:36.228070 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:36.283588 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:36.283626 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:36.308631 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:36.308660 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:36.382721 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:36.374555    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.375219    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.376727    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.377183    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:36.378650    4707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
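Note: each fruitless cycle falls back to the same four log sources. To collect them once by hand inside the node, the commands below are copied verbatim from the cycles above:

	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u containerd -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a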
	I1210 07:53:38.882925 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:38.893611 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:38.893738 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:38.919385 1078428 cri.go:89] found id: ""
	I1210 07:53:38.919418 1078428 logs.go:282] 0 containers: []
	W1210 07:53:38.919427 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:38.919433 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:38.919504 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:38.943787 1078428 cri.go:89] found id: ""
	I1210 07:53:38.943814 1078428 logs.go:282] 0 containers: []
	W1210 07:53:38.943824 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:38.943832 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:38.943896 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:38.968361 1078428 cri.go:89] found id: ""
	I1210 07:53:38.968433 1078428 logs.go:282] 0 containers: []
	W1210 07:53:38.968451 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:38.968458 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:38.968520 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:38.995636 1078428 cri.go:89] found id: ""
	I1210 07:53:38.995661 1078428 logs.go:282] 0 containers: []
	W1210 07:53:38.995670 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:38.995677 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:38.995754 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:39.021416 1078428 cri.go:89] found id: ""
	I1210 07:53:39.021452 1078428 logs.go:282] 0 containers: []
	W1210 07:53:39.021462 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:39.021470 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:39.021552 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:39.048415 1078428 cri.go:89] found id: ""
	I1210 07:53:39.048441 1078428 logs.go:282] 0 containers: []
	W1210 07:53:39.048450 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:39.048456 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:39.048545 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:39.074528 1078428 cri.go:89] found id: ""
	I1210 07:53:39.074554 1078428 logs.go:282] 0 containers: []
	W1210 07:53:39.074563 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:39.074569 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:39.074633 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:39.099525 1078428 cri.go:89] found id: ""
	I1210 07:53:39.099551 1078428 logs.go:282] 0 containers: []
	W1210 07:53:39.099571 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
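
Each probe cycle above asks crictl for container IDs matching one control-plane name; an empty result is what logs.go:282 records as "0 containers". A rough standalone equivalent of that scan (a sketch assuming crictl on PATH and sudo access; the names are copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	names := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range names {
		// Same command the cycle above runs over SSH:
		//   sudo crictl ps -a --quiet --name=<name>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: probe failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers\n", name, len(ids))
	}
}
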
	I1210 07:53:39.099581 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:39.099594 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:39.166056 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:39.157362    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.157880    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.159752    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.160310    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.161990    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:39.157362    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.157880    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.159752    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.160310    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:39.161990    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:39.166080 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:39.166094 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:39.191445 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:39.191482 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:39.221901 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:39.221931 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:39.276698 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:39.276735 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:53:37.554212 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:40.054014 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
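
The interleaved node_ready.go lines come from the parallel no-preload-587009 run hitting the same symptom: its apiserver at 192.168.85.2:8443 refuses connections, so the node "Ready" check keeps retrying. A minimal client-go sketch of that check (an assumption-laden illustration, not minikube's code: it assumes a kubeconfig at the default path, and the node name is copied from the log):

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-587009", metav1.GetOptions{})
	if err != nil {
		// With the apiserver down this is the "connection refused"
		// error that node_ready.go reports above before retrying.
		fmt.Println("get node failed:", err)
		return
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s\n", c.Status)
		}
	}
}
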
	I1210 07:53:41.793231 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:41.806351 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:41.806419 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:41.833486 1078428 cri.go:89] found id: ""
	I1210 07:53:41.833508 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.833517 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:41.833523 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:41.833587 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:41.863627 1078428 cri.go:89] found id: ""
	I1210 07:53:41.863650 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.863659 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:41.863665 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:41.863723 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:41.891468 1078428 cri.go:89] found id: ""
	I1210 07:53:41.891492 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.891502 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:41.891509 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:41.891575 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:41.916517 1078428 cri.go:89] found id: ""
	I1210 07:53:41.916542 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.916550 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:41.916557 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:41.916616 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:41.942528 1078428 cri.go:89] found id: ""
	I1210 07:53:41.942555 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.942577 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:41.942584 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:41.942646 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:41.966600 1078428 cri.go:89] found id: ""
	I1210 07:53:41.966624 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.966633 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:41.966639 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:41.966707 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:41.990797 1078428 cri.go:89] found id: ""
	I1210 07:53:41.990831 1078428 logs.go:282] 0 containers: []
	W1210 07:53:41.990840 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:41.990846 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:41.990914 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:42.024121 1078428 cri.go:89] found id: ""
	I1210 07:53:42.024148 1078428 logs.go:282] 0 containers: []
	W1210 07:53:42.024158 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:42.024169 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:42.024181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:42.080753 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:42.080799 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:42.098930 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:42.098965 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:42.176005 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:42.165806    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.166811    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.168570    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.169232    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.170912    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:42.165806    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.166811    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.168570    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.169232    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:42.170912    4918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:42.176075 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:42.176108 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:42.205998 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:42.206045 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:53:42.054513 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:44.553993 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:44.740690 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:44.751788 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:44.751908 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:44.777536 1078428 cri.go:89] found id: ""
	I1210 07:53:44.777563 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.777571 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:44.777578 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:44.777640 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:44.805133 1078428 cri.go:89] found id: ""
	I1210 07:53:44.805161 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.805170 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:44.805176 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:44.805237 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:44.842340 1078428 cri.go:89] found id: ""
	I1210 07:53:44.842368 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.842383 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:44.842390 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:44.842451 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:44.875009 1078428 cri.go:89] found id: ""
	I1210 07:53:44.875035 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.875044 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:44.875050 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:44.875144 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:44.900854 1078428 cri.go:89] found id: ""
	I1210 07:53:44.900880 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.900889 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:44.900895 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:44.900993 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:44.926168 1078428 cri.go:89] found id: ""
	I1210 07:53:44.926194 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.926203 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:44.926210 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:44.926302 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:44.951565 1078428 cri.go:89] found id: ""
	I1210 07:53:44.951590 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.951599 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:44.951605 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:44.951700 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:44.981123 1078428 cri.go:89] found id: ""
	I1210 07:53:44.981151 1078428 logs.go:282] 0 containers: []
	W1210 07:53:44.981160 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:44.981170 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:44.981181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:45.061176 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:45.048172    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.049203    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.052057    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.052627    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.055814    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:45.048172    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.049203    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.052057    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.052627    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:45.055814    5025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:45.061213 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:45.061227 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:45.119245 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:45.119283 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:45.172398 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:45.172430 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:45.255583 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:45.255726 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
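
Between probe cycles the tool tails unit logs and kernel messages. The dmesg flags above are util-linux options: -P disables the pager, -H selects human-readable output, -L=never turns colour off, and --level restricts output to the listed severities. The same gathering run locally instead of over SSH (an illustrative sketch, not minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := []string{
		`journalctl -u kubelet -n 400`,
		`journalctl -u containerd -n 400`,
		`dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
	}
	for _, c := range cmds {
		// sudo bash -c "<cmd>" mirrors the invocations logged above.
		out, err := exec.Command("sudo", "bash", "-c", c).CombinedOutput()
		fmt.Printf("$ sudo bash -c %q (err=%v)\n%s\n", c, err, out)
	}
}
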
	I1210 07:53:47.779428 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:47.790537 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:47.790611 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:47.831579 1078428 cri.go:89] found id: ""
	I1210 07:53:47.831602 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.831610 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:47.831617 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:47.831677 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:47.859808 1078428 cri.go:89] found id: ""
	I1210 07:53:47.859835 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.859844 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:47.859850 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:47.859916 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:47.885720 1078428 cri.go:89] found id: ""
	I1210 07:53:47.885745 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.885754 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:47.885761 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:47.885829 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:47.910568 1078428 cri.go:89] found id: ""
	I1210 07:53:47.910594 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.910604 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:47.910610 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:47.910668 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:47.934447 1078428 cri.go:89] found id: ""
	I1210 07:53:47.934495 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.934505 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:47.934511 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:47.934571 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:47.959745 1078428 cri.go:89] found id: ""
	I1210 07:53:47.959772 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.959782 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:47.959788 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:47.959871 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:47.984059 1078428 cri.go:89] found id: ""
	I1210 07:53:47.984085 1078428 logs.go:282] 0 containers: []
	W1210 07:53:47.984095 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:47.984102 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:47.984163 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:48.011978 1078428 cri.go:89] found id: ""
	I1210 07:53:48.012007 1078428 logs.go:282] 0 containers: []
	W1210 07:53:48.012018 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:48.012030 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:48.012043 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:48.069700 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:48.069738 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:48.086303 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:48.086345 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:48.160973 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:48.152656    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.153231    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.154815    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.155339    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.156984    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:48.152656    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.153231    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.154815    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.155339    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:48.156984    5144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:48.160994 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:48.161008 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:48.185832 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:48.185868 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:53:46.554777 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:49.054179 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:50.713469 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:50.724372 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:50.724452 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:50.750268 1078428 cri.go:89] found id: ""
	I1210 07:53:50.750292 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.750300 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:50.750306 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:50.750368 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:50.776624 1078428 cri.go:89] found id: ""
	I1210 07:53:50.776689 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.776704 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:50.776711 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:50.776769 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:50.807024 1078428 cri.go:89] found id: ""
	I1210 07:53:50.807051 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.807060 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:50.807070 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:50.807127 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:50.851753 1078428 cri.go:89] found id: ""
	I1210 07:53:50.851831 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.851855 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:50.851879 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:50.852000 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:50.878419 1078428 cri.go:89] found id: ""
	I1210 07:53:50.878571 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.878589 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:50.878597 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:50.878667 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:50.904710 1078428 cri.go:89] found id: ""
	I1210 07:53:50.904741 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.904750 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:50.904756 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:50.904819 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:50.929368 1078428 cri.go:89] found id: ""
	I1210 07:53:50.929398 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.929421 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:50.929428 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:50.929495 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:50.956973 1078428 cri.go:89] found id: ""
	I1210 07:53:50.956998 1078428 logs.go:282] 0 containers: []
	W1210 07:53:50.957006 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:50.957016 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:50.957028 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:50.982743 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:50.982778 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:51.015675 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:51.015706 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:51.072656 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:51.072697 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:51.089028 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:51.089115 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:51.156089 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:51.147578    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.148310    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.150005    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.150357    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.151933    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:51.147578    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.148310    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.150005    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.150357    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:51.151933    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
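
The whole sequence above, pgrep for an apiserver process, scan CRI containers, gather logs, repeats on a roughly three-second cadence until a deadline. A compressed sketch of such a wait loop (the cadence and four-minute budget are illustrative assumptions, not minikube's actual values):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		// Same probe as "sudo pgrep -xnf kube-apiserver.*minikube.*" above;
		// a non-nil error means pgrep exited 1, i.e. no matching process.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
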
	I1210 07:53:53.657305 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:53.668282 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:53.668364 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:53.693314 1078428 cri.go:89] found id: ""
	I1210 07:53:53.693340 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.693349 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:53.693356 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:53.693417 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:53.718128 1078428 cri.go:89] found id: ""
	I1210 07:53:53.718154 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.718169 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:53.718176 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:53.718234 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:53.744359 1078428 cri.go:89] found id: ""
	I1210 07:53:53.744397 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.744406 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:53.744412 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:53.744485 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:53.773658 1078428 cri.go:89] found id: ""
	I1210 07:53:53.773737 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.773760 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:53.773782 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:53.773879 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:53.804702 1078428 cri.go:89] found id: ""
	I1210 07:53:53.804772 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.804796 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:53.804815 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:53.804905 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:53.840639 1078428 cri.go:89] found id: ""
	I1210 07:53:53.840706 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.840730 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:53.840753 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:53.840846 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:53.869303 1078428 cri.go:89] found id: ""
	I1210 07:53:53.869373 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.869397 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:53.869419 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:53.869508 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:53.898651 1078428 cri.go:89] found id: ""
	I1210 07:53:53.898742 1078428 logs.go:282] 0 containers: []
	W1210 07:53:53.898764 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:53.898787 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:53.898821 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:53.924144 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:53.924181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:53.953086 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:53.953118 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:54.008451 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:54.008555 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:54.027281 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:54.027312 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:54.091065 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:54.082296    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.083017    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.084636    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.085187    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.086808    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:54.082296    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.083017    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.084636    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.085187    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:54.086808    5384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1210 07:53:51.054819 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:53.554121 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:56.591259 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:56.602391 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:56.602493 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:56.627566 1078428 cri.go:89] found id: ""
	I1210 07:53:56.627597 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.627607 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:56.627614 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:56.627677 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:56.654900 1078428 cri.go:89] found id: ""
	I1210 07:53:56.654928 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.654937 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:56.654944 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:56.655007 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:56.679562 1078428 cri.go:89] found id: ""
	I1210 07:53:56.679592 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.679606 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:56.679612 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:56.679737 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:56.703320 1078428 cri.go:89] found id: ""
	I1210 07:53:56.703345 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.703355 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:56.703361 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:56.703420 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:56.731538 1078428 cri.go:89] found id: ""
	I1210 07:53:56.731564 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.731573 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:56.731579 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:56.731664 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:56.756416 1078428 cri.go:89] found id: ""
	I1210 07:53:56.756442 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.756451 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:56.756457 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:56.756523 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:56.785074 1078428 cri.go:89] found id: ""
	I1210 07:53:56.785097 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.785106 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:56.785111 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:56.785171 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:56.815793 1078428 cri.go:89] found id: ""
	I1210 07:53:56.815821 1078428 logs.go:282] 0 containers: []
	W1210 07:53:56.815831 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:56.815842 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:56.815856 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:56.834351 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:56.834380 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:56.907823 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:56.899947    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.900451    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.901995    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.902428    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.903899    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:56.899947    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.900451    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.901995    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.902428    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:56.903899    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:56.907857 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:56.907871 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:56.933197 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:56.933233 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:56.964346 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:56.964378 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:53:55.554659 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:53:58.054078 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:00.054143 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:53:59.520946 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:59.531324 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:59.531414 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:59.563870 1078428 cri.go:89] found id: ""
	I1210 07:53:59.563897 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.563907 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:59.563913 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:53:59.564000 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:59.593355 1078428 cri.go:89] found id: ""
	I1210 07:53:59.593385 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.593394 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:53:59.593400 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:53:59.593468 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:59.620235 1078428 cri.go:89] found id: ""
	I1210 07:53:59.620263 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.620272 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:53:59.620278 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:59.620338 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:59.645074 1078428 cri.go:89] found id: ""
	I1210 07:53:59.645099 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.645108 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:59.645114 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:59.645178 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:59.673804 1078428 cri.go:89] found id: ""
	I1210 07:53:59.673830 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.673839 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:59.673845 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:59.673902 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:59.697766 1078428 cri.go:89] found id: ""
	I1210 07:53:59.697793 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.697803 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:59.697810 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:59.697868 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:59.725582 1078428 cri.go:89] found id: ""
	I1210 07:53:59.725608 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.725617 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:59.725623 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:53:59.725681 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:53:59.750402 1078428 cri.go:89] found id: ""
	I1210 07:53:59.750428 1078428 logs.go:282] 0 containers: []
	W1210 07:53:59.750437 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:53:59.750447 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:53:59.750458 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:53:59.775346 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:53:59.775383 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:59.815776 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:59.815804 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:59.876120 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:59.876164 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:59.897440 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:59.897470 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:59.962486 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:59.954416    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.955115    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.956695    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.956999    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.958498    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:59.954416    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.955115    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.956695    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.956999    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:53:59.958498    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
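(Every "failed describe nodes" cycle above fails the same way: kubectl inside the node cannot reach localhost:8443, so it can neither fetch the API group list nor describe anything. As a quick illustration, not part of the test itself, the whole condition reduces to whether anything is listening on the apiserver port; a minimal probe, assuming the port 8443 used in this run:)

    // Minimal TCP probe of the apiserver port. A "connection refused" from
    // DialTimeout corresponds exactly to the errors in the log above.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver port closed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port open")
    }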
	I1210 07:54:02.463154 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:02.473950 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:02.474039 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:02.498884 1078428 cri.go:89] found id: ""
	I1210 07:54:02.498907 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.498916 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:02.498923 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:02.498982 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:02.523553 1078428 cri.go:89] found id: ""
	I1210 07:54:02.523582 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.523591 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:02.523597 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:02.523660 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:02.552876 1078428 cri.go:89] found id: ""
	I1210 07:54:02.552902 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.552911 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:02.552918 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:02.552976 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:02.583793 1078428 cri.go:89] found id: ""
	I1210 07:54:02.583818 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.583827 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:02.583833 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:02.583895 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:02.625932 1078428 cri.go:89] found id: ""
	I1210 07:54:02.625959 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.625969 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:02.625976 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:02.626044 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:02.652709 1078428 cri.go:89] found id: ""
	I1210 07:54:02.652784 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.652800 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:02.652808 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:02.652868 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:02.680830 1078428 cri.go:89] found id: ""
	I1210 07:54:02.680859 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.680868 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:02.680874 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:02.680933 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:02.706663 1078428 cri.go:89] found id: ""
	I1210 07:54:02.706687 1078428 logs.go:282] 0 containers: []
	W1210 07:54:02.706696 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:02.706704 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:02.706715 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:02.763069 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:02.763105 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:02.779309 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:02.779340 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:02.864302 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:02.854179    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.854989    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.858064    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.858689    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.860317    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:02.854179    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.854989    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.858064    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.858689    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:02.860317    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:02.864326 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:02.864339 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:02.890235 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:02.890274 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:54:02.554570 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:04.555006 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:05.418128 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:05.429523 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:05.429604 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:05.456726 1078428 cri.go:89] found id: ""
	I1210 07:54:05.456755 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.456765 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:05.456772 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:05.456851 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:05.485039 1078428 cri.go:89] found id: ""
	I1210 07:54:05.485065 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.485074 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:05.485080 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:05.485169 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:05.510634 1078428 cri.go:89] found id: ""
	I1210 07:54:05.510658 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.510668 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:05.510674 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:05.510733 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:05.536710 1078428 cri.go:89] found id: ""
	I1210 07:54:05.536743 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.536753 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:05.536760 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:05.536848 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:05.568911 1078428 cri.go:89] found id: ""
	I1210 07:54:05.568991 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.569015 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:05.569040 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:05.569150 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:05.598888 1078428 cri.go:89] found id: ""
	I1210 07:54:05.598964 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.598987 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:05.599007 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:05.599101 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:05.630665 1078428 cri.go:89] found id: ""
	I1210 07:54:05.630741 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.630771 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:05.630779 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:05.630850 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:05.654676 1078428 cri.go:89] found id: ""
	I1210 07:54:05.654702 1078428 logs.go:282] 0 containers: []
	W1210 07:54:05.654712 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:05.654722 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:05.654733 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:05.712685 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:05.712722 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:05.728743 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:05.728774 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:05.807287 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:05.790159    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.790981    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.792596    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.793583    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.794194    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:05.790159    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.790981    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.792596    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.793583    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:05.794194    5812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:05.807311 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:05.807325 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:05.835209 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:05.835246 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:08.367017 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:08.377830 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:08.377904 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:08.402753 1078428 cri.go:89] found id: ""
	I1210 07:54:08.402778 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.402787 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:08.402795 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:08.402856 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:08.427920 1078428 cri.go:89] found id: ""
	I1210 07:54:08.427947 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.427956 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:08.427963 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:08.428021 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:08.453012 1078428 cri.go:89] found id: ""
	I1210 07:54:08.453037 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.453045 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:08.453052 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:08.453114 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:08.477565 1078428 cri.go:89] found id: ""
	I1210 07:54:08.477591 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.477606 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:08.477612 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:08.477673 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:08.501669 1078428 cri.go:89] found id: ""
	I1210 07:54:08.501694 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.501740 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:08.501750 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:08.501816 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:08.530594 1078428 cri.go:89] found id: ""
	I1210 07:54:08.530667 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.530704 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:08.530719 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:08.530799 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:08.561145 1078428 cri.go:89] found id: ""
	I1210 07:54:08.561171 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.561179 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:08.561186 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:08.561244 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:08.595663 1078428 cri.go:89] found id: ""
	I1210 07:54:08.595686 1078428 logs.go:282] 0 containers: []
	W1210 07:54:08.595695 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:08.595706 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:08.595718 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:08.622963 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:08.623002 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:08.652801 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:08.652829 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:08.708272 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:08.708307 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:08.724144 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:08.724174 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:08.790000 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:08.782113    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.782760    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.784347    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.784840    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.786314    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:08.782113    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.782760    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.784347    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.784840    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:08.786314    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
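(Each retry cycle also re-runs the same container probes: one crictl ps -a --quiet --name=<component> per control-plane component, every one returning an empty ID list because no container was ever started. A hedged local equivalent of that loop is sketched below; minikube actually drives these commands through its ssh_runner inside the node, and the sketch assumes crictl and sudo are available on the host:)

    // Illustrative re-run of the per-component probes above. An empty result
    // from crictl is what logs.go:284 reports as "No container was found".
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil || strings.TrimSpace(string(out)) == "" {
    			fmt.Printf("no container was found matching %q\n", name)
    		}
    	}
    }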
	W1210 07:54:07.054035 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:09.054348 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:11.291584 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:11.302037 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:11.302111 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:11.331607 1078428 cri.go:89] found id: ""
	I1210 07:54:11.331631 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.331640 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:11.331646 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:11.331711 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:11.355008 1078428 cri.go:89] found id: ""
	I1210 07:54:11.355031 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.355039 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:11.355045 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:11.355104 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:11.380347 1078428 cri.go:89] found id: ""
	I1210 07:54:11.380423 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.380463 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:11.380485 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:11.380572 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:11.410797 1078428 cri.go:89] found id: ""
	I1210 07:54:11.410824 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.410834 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:11.410840 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:11.410898 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:11.435927 1078428 cri.go:89] found id: ""
	I1210 07:54:11.435996 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.436021 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:11.436035 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:11.436109 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:11.461484 1078428 cri.go:89] found id: ""
	I1210 07:54:11.461520 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.461529 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:11.461536 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:11.461603 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:11.486793 1078428 cri.go:89] found id: ""
	I1210 07:54:11.486817 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.486825 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:11.486831 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:11.486890 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:11.515338 1078428 cri.go:89] found id: ""
	I1210 07:54:11.515364 1078428 logs.go:282] 0 containers: []
	W1210 07:54:11.515374 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:11.515384 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:11.515396 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:11.593473 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:11.585339    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.586129    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.587754    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.588062    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.589542    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:11.585339    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.586129    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.587754    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.588062    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:11.589542    6030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:11.593495 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:11.593509 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:11.619492 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:11.619523 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:11.646739 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:11.646771 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:11.701149 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:11.701187 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:14.217342 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:14.228228 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:14.228306 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:14.254323 1078428 cri.go:89] found id: ""
	I1210 07:54:14.254360 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.254369 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:14.254375 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:14.254443 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:14.279268 1078428 cri.go:89] found id: ""
	I1210 07:54:14.279295 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.279303 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:14.279310 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:14.279397 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:14.304531 1078428 cri.go:89] found id: ""
	I1210 07:54:14.304558 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.304567 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:14.304574 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:14.304647 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:14.329458 1078428 cri.go:89] found id: ""
	I1210 07:54:14.329487 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.329496 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:14.329502 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:14.329563 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:14.359168 1078428 cri.go:89] found id: ""
	I1210 07:54:14.359241 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.359258 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:14.359266 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:14.359348 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:14.386391 1078428 cri.go:89] found id: ""
	I1210 07:54:14.386426 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.386435 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:14.386442 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:14.386540 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:14.411808 1078428 cri.go:89] found id: ""
	I1210 07:54:14.411843 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.411862 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:14.411870 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:14.411946 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:14.440262 1078428 cri.go:89] found id: ""
	I1210 07:54:14.440292 1078428 logs.go:282] 0 containers: []
	W1210 07:54:14.440301 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:14.440311 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:14.440322 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:54:11.553952 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:13.554999 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:14.496340 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:14.496376 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:14.512934 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:14.512963 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:14.584969 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:14.576398    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.577208    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.578910    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.579485    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.581005    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:14.576398    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.577208    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.578910    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.579485    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:14.581005    6147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:14.585042 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:14.585069 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:14.615045 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:14.615086 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:17.146612 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:17.157236 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:17.157307 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:17.184080 1078428 cri.go:89] found id: ""
	I1210 07:54:17.184102 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.184111 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:17.184117 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:17.184177 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:17.212720 1078428 cri.go:89] found id: ""
	I1210 07:54:17.212745 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.212754 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:17.212760 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:17.212822 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:17.238495 1078428 cri.go:89] found id: ""
	I1210 07:54:17.238521 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.238529 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:17.238542 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:17.238603 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:17.262892 1078428 cri.go:89] found id: ""
	I1210 07:54:17.262921 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.262930 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:17.262936 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:17.262996 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:17.291473 1078428 cri.go:89] found id: ""
	I1210 07:54:17.291498 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.291508 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:17.291514 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:17.291573 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:17.317108 1078428 cri.go:89] found id: ""
	I1210 07:54:17.317133 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.317142 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:17.317149 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:17.317209 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:17.344918 1078428 cri.go:89] found id: ""
	I1210 07:54:17.344944 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.344953 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:17.344959 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:17.345019 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:17.370082 1078428 cri.go:89] found id: ""
	I1210 07:54:17.370109 1078428 logs.go:282] 0 containers: []
	W1210 07:54:17.370118 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:17.370128 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:17.370139 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:17.427357 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:17.427407 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:17.443363 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:17.443393 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:17.509516 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:17.501130    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.501831    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.503489    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.503992    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.505575    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:17.501130    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.501831    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.503489    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.503992    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:17.505575    6261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:17.509538 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:17.509551 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:17.535043 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:17.535078 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:54:16.053965 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:18.054150 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:20.071194 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:20.083928 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:20.084059 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:20.119958 1078428 cri.go:89] found id: ""
	I1210 07:54:20.119987 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.119996 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:20.120002 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:20.120062 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:20.144861 1078428 cri.go:89] found id: ""
	I1210 07:54:20.144883 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.144891 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:20.144897 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:20.144957 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:20.180042 1078428 cri.go:89] found id: ""
	I1210 07:54:20.180069 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.180078 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:20.180085 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:20.180151 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:20.208390 1078428 cri.go:89] found id: ""
	I1210 07:54:20.208423 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.208432 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:20.208439 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:20.208511 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:20.234337 1078428 cri.go:89] found id: ""
	I1210 07:54:20.234358 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.234367 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:20.234373 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:20.234441 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:20.263116 1078428 cri.go:89] found id: ""
	I1210 07:54:20.263138 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.263146 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:20.263153 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:20.263213 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:20.287115 1078428 cri.go:89] found id: ""
	I1210 07:54:20.287188 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.287203 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:20.287210 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:20.287281 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:20.312391 1078428 cri.go:89] found id: ""
	I1210 07:54:20.312415 1078428 logs.go:282] 0 containers: []
	W1210 07:54:20.312423 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:20.312432 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:20.312443 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:20.369802 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:20.369838 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:20.387018 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:20.387099 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:20.458731 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:20.450398    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.451165    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.452844    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.453407    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.454975    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:20.450398    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.451165    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.452844    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.453407    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:20.454975    6373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:20.458801 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:20.458828 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:20.483627 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:20.483662 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
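
The cycle above repeats for the remainder of this test: minikube probes each expected control-plane component with "crictl ps -a --quiet --name=<component>", and when every query returns an empty ID list it falls back to gathering journald and container-status output for the failure report. Below is a minimal local sketch of that probe-then-gather pattern, assuming crictl and journalctl are on PATH; it is an illustration only, since minikube's real cri.go/logs.go run the same commands over SSH inside the node container.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs asks crictl for all container IDs whose name matches the
    // given component; empty output means no matching container exists.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one ID per line
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        anyFound := false
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("listing %q failed: %v\n", c, err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", c)
                continue
            }
            anyFound = true
            fmt.Printf("%q -> %v\n", c, ids)
        }
        if !anyFound {
            // Mirrors the log's fallback: when nothing is running, collect
            // recent kubelet unit logs to explain why the pods never started.
            out, _ := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").CombinedOutput()
            fmt.Println(string(out))
        }
    }

In this run every probe returns an empty list, which is why each cycle ends with the journalctl, dmesg, and "describe nodes" gathers instead of container logs.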
	I1210 07:54:23.014658 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:23.025123 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:23.025235 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:23.060798 1078428 cri.go:89] found id: ""
	I1210 07:54:23.060872 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.060909 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:23.060934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:23.061025 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:23.092890 1078428 cri.go:89] found id: ""
	I1210 07:54:23.092965 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.092987 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:23.093018 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:23.093129 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:23.122215 1078428 cri.go:89] found id: ""
	I1210 07:54:23.122290 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.122314 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:23.122335 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:23.122418 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:23.147080 1078428 cri.go:89] found id: ""
	I1210 07:54:23.147108 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.147117 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:23.147123 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:23.147213 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:23.171020 1078428 cri.go:89] found id: ""
	I1210 07:54:23.171043 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.171052 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:23.171064 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:23.171120 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:23.195821 1078428 cri.go:89] found id: ""
	I1210 07:54:23.195889 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.195914 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:23.195929 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:23.196016 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:23.219821 1078428 cri.go:89] found id: ""
	I1210 07:54:23.219901 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.219926 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:23.219941 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:23.220025 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:23.248052 1078428 cri.go:89] found id: ""
	I1210 07:54:23.248079 1078428 logs.go:282] 0 containers: []
	W1210 07:54:23.248088 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:23.248098 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:23.248109 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:23.305179 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:23.305215 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:23.321081 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:23.321111 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:23.391528 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:23.376042    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.376445    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.378118    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.385464    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.387083    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:23.376042    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.376445    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.378118    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.385464    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:23.387083    6485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:23.391553 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:23.391565 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:23.416476 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:23.416509 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:54:20.554048 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:22.554698 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:24.554805 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
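
The interleaved node_ready.go warnings come from a second test process (1077343) polling the "no-preload-587009" node until its Ready condition turns True, retrying through connection-refused errors while that cluster's apiserver is also unreachable. A hedged client-go sketch of such a readiness poll follows; the kubeconfig path is a placeholder and this illustrates the pattern, not minikube's own implementation.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(5 * time.Minute)
        for time.Now().Before(deadline) {
            n, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-587009", metav1.GetOptions{})
            if err != nil {
                // While the apiserver is down this is the "connection refused"
                // error repeated in the warnings above; keep retrying.
                fmt.Println("error getting node (will retry):", err)
            } else if nodeReady(n) {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for node to become Ready")
    }
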
	I1210 07:54:25.951859 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:25.962115 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:25.962185 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:25.986216 1078428 cri.go:89] found id: ""
	I1210 07:54:25.986286 1078428 logs.go:282] 0 containers: []
	W1210 07:54:25.986310 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:25.986334 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:25.986426 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:26.011668 1078428 cri.go:89] found id: ""
	I1210 07:54:26.011696 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.011705 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:26.011712 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:26.011773 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:26.037538 1078428 cri.go:89] found id: ""
	I1210 07:54:26.037560 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.037569 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:26.037575 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:26.037634 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:26.066974 1078428 cri.go:89] found id: ""
	I1210 07:54:26.066996 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.067006 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:26.067013 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:26.067071 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:26.100870 1078428 cri.go:89] found id: ""
	I1210 07:54:26.100892 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.100901 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:26.100907 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:26.100966 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:26.130861 1078428 cri.go:89] found id: ""
	I1210 07:54:26.130883 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.130891 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:26.130897 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:26.130957 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:26.156407 1078428 cri.go:89] found id: ""
	I1210 07:54:26.156429 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.156438 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:26.156444 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:26.156502 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:26.182081 1078428 cri.go:89] found id: ""
	I1210 07:54:26.182102 1078428 logs.go:282] 0 containers: []
	W1210 07:54:26.182110 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:26.182119 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:26.182133 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:26.239878 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:26.239917 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:26.259189 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:26.259219 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:26.328449 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:26.321353    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.321729    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.322876    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.323201    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.324632    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:26.321353    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.321729    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.322876    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.323201    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:26.324632    6596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:26.328475 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:26.328490 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:26.353246 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:26.353278 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:28.882607 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:28.893420 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:28.893495 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:28.917577 1078428 cri.go:89] found id: ""
	I1210 07:54:28.917603 1078428 logs.go:282] 0 containers: []
	W1210 07:54:28.917611 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:28.917617 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:28.917677 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:28.949094 1078428 cri.go:89] found id: ""
	I1210 07:54:28.949123 1078428 logs.go:282] 0 containers: []
	W1210 07:54:28.949132 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:28.949138 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:28.949202 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:28.976683 1078428 cri.go:89] found id: ""
	I1210 07:54:28.976708 1078428 logs.go:282] 0 containers: []
	W1210 07:54:28.976716 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:28.976722 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:28.976783 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:29.001326 1078428 cri.go:89] found id: ""
	I1210 07:54:29.001395 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.001420 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:29.001440 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:29.001526 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:29.026870 1078428 cri.go:89] found id: ""
	I1210 07:54:29.026894 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.026903 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:29.026909 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:29.026992 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:29.059072 1078428 cri.go:89] found id: ""
	I1210 07:54:29.059106 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.059115 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:29.059122 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:29.059190 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:29.089329 1078428 cri.go:89] found id: ""
	I1210 07:54:29.089363 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.089372 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:29.089379 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:29.089446 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:29.116648 1078428 cri.go:89] found id: ""
	I1210 07:54:29.116671 1078428 logs.go:282] 0 containers: []
	W1210 07:54:29.116680 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:29.116689 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:29.116701 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:29.141429 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:29.141465 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:29.168073 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:29.168102 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:29.223128 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:29.223165 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:29.239118 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:29.239149 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:29.304306 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:29.295477    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.296316    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.297933    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.298227    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.300445    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:29.295477    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.296316    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.297933    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.298227    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:29.300445    6720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
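
Every "describe nodes" attempt fails the same way: kubectl cannot reach https://localhost:8443 because no kube-apiserver container exists, so the kernel refuses the TCP connection outright. A quick way to confirm that diagnosis, sketched here as an assumption rather than anything the test harness runs, is a bare TCP dial to the port:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // With kube-apiserver down this prints a "connection refused"
            // error, matching the kubectl stderr captured in the log.
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }

"Connection refused", as seen throughout this section, means nothing is bound to the port at all; a dial timeout instead would point at a hung or firewalled server rather than a missing one.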
	W1210 07:54:27.054859 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:29.554545 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:31.805827 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:31.819227 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:31.819305 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:31.852872 1078428 cri.go:89] found id: ""
	I1210 07:54:31.852901 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.852910 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:31.852916 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:31.852973 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:31.881145 1078428 cri.go:89] found id: ""
	I1210 07:54:31.881173 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.881182 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:31.881188 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:31.881249 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:31.907195 1078428 cri.go:89] found id: ""
	I1210 07:54:31.907218 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.907227 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:31.907233 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:31.907292 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:31.931775 1078428 cri.go:89] found id: ""
	I1210 07:54:31.931799 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.931808 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:31.931814 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:31.931876 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:31.957735 1078428 cri.go:89] found id: ""
	I1210 07:54:31.957764 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.957772 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:31.957779 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:31.957837 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:31.982202 1078428 cri.go:89] found id: ""
	I1210 07:54:31.982285 1078428 logs.go:282] 0 containers: []
	W1210 07:54:31.982308 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:31.982334 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:31.982441 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:32.011091 1078428 cri.go:89] found id: ""
	I1210 07:54:32.011119 1078428 logs.go:282] 0 containers: []
	W1210 07:54:32.011129 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:32.011138 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:32.011205 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:32.039293 1078428 cri.go:89] found id: ""
	I1210 07:54:32.039371 1078428 logs.go:282] 0 containers: []
	W1210 07:54:32.039388 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:32.039399 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:32.039410 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:32.067441 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:32.067482 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:32.105238 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:32.105273 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:32.164873 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:32.164913 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:32.181394 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:32.181477 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:32.250195 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:32.241937    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.242446    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.244284    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.244640    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.246223    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:32.241937    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.242446    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.244284    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.244640    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:32.246223    6838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1210 07:54:32.054006 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:34.054566 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:34.751129 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:34.761490 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:34.761559 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:34.785680 1078428 cri.go:89] found id: ""
	I1210 07:54:34.785702 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.785711 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:34.785716 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:34.785775 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:34.820785 1078428 cri.go:89] found id: ""
	I1210 07:54:34.820809 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.820817 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:34.820823 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:34.820892 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:34.852508 1078428 cri.go:89] found id: ""
	I1210 07:54:34.852531 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.852539 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:34.852545 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:34.852604 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:34.879064 1078428 cri.go:89] found id: ""
	I1210 07:54:34.879095 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.879104 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:34.879111 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:34.879179 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:34.908815 1078428 cri.go:89] found id: ""
	I1210 07:54:34.908849 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.908858 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:34.908864 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:34.908933 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:34.939793 1078428 cri.go:89] found id: ""
	I1210 07:54:34.939820 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.939831 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:34.939838 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:34.939902 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:34.966660 1078428 cri.go:89] found id: ""
	I1210 07:54:34.966730 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.966754 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:34.966775 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:34.966877 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:34.997175 1078428 cri.go:89] found id: ""
	I1210 07:54:34.997202 1078428 logs.go:282] 0 containers: []
	W1210 07:54:34.997211 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:34.997221 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:34.997233 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:35.054362 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:35.054504 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:35.071310 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:35.071339 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:35.154263 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:35.146083    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.146679    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.148200    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.148710    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.150271    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:35.146083    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.146679    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.148200    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.148710    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:35.150271    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:35.154285 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:35.154298 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:35.184377 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:35.184427 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:37.716479 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:37.727384 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:37.727475 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:37.758151 1078428 cri.go:89] found id: ""
	I1210 07:54:37.758175 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.758183 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:37.758189 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:37.758249 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:37.783547 1078428 cri.go:89] found id: ""
	I1210 07:54:37.783572 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.783580 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:37.783586 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:37.783652 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:37.824269 1078428 cri.go:89] found id: ""
	I1210 07:54:37.824302 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.824320 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:37.824326 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:37.824392 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:37.859292 1078428 cri.go:89] found id: ""
	I1210 07:54:37.859315 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.859324 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:37.859332 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:37.859391 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:37.887370 1078428 cri.go:89] found id: ""
	I1210 07:54:37.887395 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.887404 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:37.887411 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:37.887471 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:37.912568 1078428 cri.go:89] found id: ""
	I1210 07:54:37.912590 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.912599 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:37.912605 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:37.912667 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:37.942226 1078428 cri.go:89] found id: ""
	I1210 07:54:37.942294 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.942321 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:37.942341 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:37.942416 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:37.967116 1078428 cri.go:89] found id: ""
	I1210 07:54:37.967186 1078428 logs.go:282] 0 containers: []
	W1210 07:54:37.967211 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:37.967234 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:37.967261 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:38.026081 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:38.026123 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:38.044051 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:38.044086 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:38.137383 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:38.129031    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.129785    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.131296    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.131679    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.133181    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:38.129031    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.129785    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.131296    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.131679    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:38.133181    7048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:38.137408 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:38.137420 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:38.163137 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:38.163174 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:54:36.553998 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:38.554925 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:40.692712 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:40.705786 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:40.705862 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:40.730857 1078428 cri.go:89] found id: ""
	I1210 07:54:40.730881 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.730890 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:40.730896 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:40.730956 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:40.759374 1078428 cri.go:89] found id: ""
	I1210 07:54:40.759401 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.759410 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:40.759417 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:40.759481 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:40.784874 1078428 cri.go:89] found id: ""
	I1210 07:54:40.784898 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.784906 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:40.784912 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:40.784972 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:40.829615 1078428 cri.go:89] found id: ""
	I1210 07:54:40.829638 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.829648 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:40.829655 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:40.829714 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:40.855514 1078428 cri.go:89] found id: ""
	I1210 07:54:40.855537 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.855547 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:40.855553 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:40.855622 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:40.880645 1078428 cri.go:89] found id: ""
	I1210 07:54:40.880674 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.880683 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:40.880699 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:40.880762 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:40.908526 1078428 cri.go:89] found id: ""
	I1210 07:54:40.908553 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.908562 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:40.908568 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:40.908627 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:40.933389 1078428 cri.go:89] found id: ""
	I1210 07:54:40.933417 1078428 logs.go:282] 0 containers: []
	W1210 07:54:40.933427 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:40.933466 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:40.933485 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:40.989429 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:40.989508 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:41.005657 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:41.005748 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:41.093001 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:41.084101    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.084887    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.086620    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.087167    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:41.088880    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:41.093075 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:41.093107 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:41.120941 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:41.121022 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
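The cycle above probes the node for each control-plane container with `crictl ps -a --quiet --name=<component>`, and every probe returns an empty ID list, which is what produces the "No container was found matching" warnings. A minimal sketch of the same scan, assuming crictl is installed on the node and containerd is the runtime (the component names are copied from the log):

    #!/usr/bin/env bash
    # Scan for the control-plane containers the log is looking for.
    # An empty result from `crictl ps -a --quiet --name=<name>` corresponds
    # to the "No container was found matching" warnings in the trace.
    set -u
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done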
	I1210 07:54:43.650332 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:43.660886 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:43.660957 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:43.685546 1078428 cri.go:89] found id: ""
	I1210 07:54:43.685572 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.685582 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:43.685590 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:43.685652 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:43.710551 1078428 cri.go:89] found id: ""
	I1210 07:54:43.710575 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.710584 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:43.710590 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:43.710651 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:43.735321 1078428 cri.go:89] found id: ""
	I1210 07:54:43.735347 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.735357 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:43.735363 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:43.735422 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:43.760265 1078428 cri.go:89] found id: ""
	I1210 07:54:43.760290 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.760299 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:43.760305 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:43.760371 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:43.785386 1078428 cri.go:89] found id: ""
	I1210 07:54:43.785412 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.785421 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:43.785427 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:43.785491 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:43.812278 1078428 cri.go:89] found id: ""
	I1210 07:54:43.812305 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.812323 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:43.812331 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:43.812390 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:43.844260 1078428 cri.go:89] found id: ""
	I1210 07:54:43.844288 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.844297 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:43.844303 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:43.844374 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:43.878456 1078428 cri.go:89] found id: ""
	I1210 07:54:43.878503 1078428 logs.go:282] 0 containers: []
	W1210 07:54:43.878512 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:43.878522 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:43.878533 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:43.934467 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:43.934503 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:43.951761 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:43.951790 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:44.019672 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:44.010215    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.011300    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.013256    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.013896    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:44.015584    7271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:44.019739 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:44.019764 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:44.045374 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:44.045448 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
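Every `kubectl describe nodes` attempt fails identically: the client cannot reach https://localhost:8443, so the failure lies with the missing apiserver, not with kubectl. A quick sanity check, under the assumption that ss is available inside the node, would confirm nothing is listening on the port before retrying the exact command from the log:

    # Is anything listening on the apiserver port? (ss availability assumed)
    sudo ss -ltn | grep -q ':8443 ' \
      && echo "something is listening on 8443" \
      || echo "nothing listening on 8443, so connection refused is expected"

    # The exact command the trace keeps retrying, for reference:
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig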
	W1210 07:54:41.053999 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:43.054974 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:45.055139 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
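Interleaved with the 1078428 trace, process 1077343 is polling the "Ready" condition of node no-preload-587009 against 192.168.85.2:8443 and hitting the same connection-refused error. Roughly the equivalent check from a shell (server address and node name are taken from the log; the kubeconfig path is an assumption):

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      --server=https://192.168.85.2:8443 \
      get node no-preload-587009 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # While the apiserver is down this fails with:
    #   dial tcp 192.168.85.2:8443: connect: connection refused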
	I1210 07:54:46.583553 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:46.594544 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:46.594614 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:46.620989 1078428 cri.go:89] found id: ""
	I1210 07:54:46.621016 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.621026 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:46.621032 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:46.621092 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:46.646885 1078428 cri.go:89] found id: ""
	I1210 07:54:46.646912 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.646921 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:46.646927 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:46.646993 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:46.671522 1078428 cri.go:89] found id: ""
	I1210 07:54:46.671545 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.671555 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:46.671561 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:46.671627 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:46.697035 1078428 cri.go:89] found id: ""
	I1210 07:54:46.697057 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.697066 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:46.697076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:46.697135 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:46.721985 1078428 cri.go:89] found id: ""
	I1210 07:54:46.722008 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.722016 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:46.722023 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:46.722081 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:46.750862 1078428 cri.go:89] found id: ""
	I1210 07:54:46.750885 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.750894 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:46.750900 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:46.750957 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:46.775321 1078428 cri.go:89] found id: ""
	I1210 07:54:46.775347 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.775357 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:46.775363 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:46.775422 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:46.804576 1078428 cri.go:89] found id: ""
	I1210 07:54:46.804603 1078428 logs.go:282] 0 containers: []
	W1210 07:54:46.804612 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:46.804624 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:46.804635 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:46.869024 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:46.869059 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:46.887039 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:46.887068 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:46.955257 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:46.946979    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.947599    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.949092    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.949593    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:46.951087    7383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:46.955281 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:46.955294 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:46.981722 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:46.981766 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:54:47.553929 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:49.554809 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:49.512895 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:49.523585 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:49.523660 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:49.553762 1078428 cri.go:89] found id: ""
	I1210 07:54:49.553799 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.553809 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:49.553815 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:49.553883 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:49.584365 1078428 cri.go:89] found id: ""
	I1210 07:54:49.584397 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.584406 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:49.584412 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:49.584473 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:49.609054 1078428 cri.go:89] found id: ""
	I1210 07:54:49.609078 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.609088 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:49.609094 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:49.609153 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:49.633506 1078428 cri.go:89] found id: ""
	I1210 07:54:49.633585 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.633612 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:49.633632 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:49.633727 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:49.660681 1078428 cri.go:89] found id: ""
	I1210 07:54:49.660705 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.660713 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:49.660719 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:49.660779 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:49.684429 1078428 cri.go:89] found id: ""
	I1210 07:54:49.684456 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.684465 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:49.684472 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:49.684559 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:49.708792 1078428 cri.go:89] found id: ""
	I1210 07:54:49.708825 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.708834 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:49.708841 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:49.708907 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:49.733028 1078428 cri.go:89] found id: ""
	I1210 07:54:49.733061 1078428 logs.go:282] 0 containers: []
	W1210 07:54:49.733070 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:49.733080 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:49.733093 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:49.788419 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:49.788454 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:49.806199 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:49.806229 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:49.890193 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:49.880173    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.881024    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.882835    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.883725    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:49.885590    7497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:49.890216 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:49.890229 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:49.916164 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:49.916201 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
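The timestamps show the whole scan re-running roughly every three seconds, gated on `pgrep` finding a kube-apiserver process. A loop with the same shape, as a sketch only (the 3-second interval is inferred from the timestamps, not taken from minikube's source; the pgrep pattern is copied verbatim from the log):

    # Wait for a kube-apiserver process to appear, re-checking every ~3s.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      echo "$(date +%T) kube-apiserver not running yet"
      sleep 3
    done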
	I1210 07:54:52.445192 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:52.455938 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:52.456011 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:52.483578 1078428 cri.go:89] found id: ""
	I1210 07:54:52.483607 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.483615 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:52.483622 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:52.483681 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:52.508996 1078428 cri.go:89] found id: ""
	I1210 07:54:52.509019 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.509028 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:52.509035 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:52.509100 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:52.534163 1078428 cri.go:89] found id: ""
	I1210 07:54:52.534189 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.534197 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:52.534204 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:52.534262 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:52.559446 1078428 cri.go:89] found id: ""
	I1210 07:54:52.559468 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.559476 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:52.559482 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:52.559538 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:52.585685 1078428 cri.go:89] found id: ""
	I1210 07:54:52.585705 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.585714 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:52.585720 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:52.585781 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:52.610362 1078428 cri.go:89] found id: ""
	I1210 07:54:52.610387 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.610396 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:52.610429 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:52.610553 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:52.639114 1078428 cri.go:89] found id: ""
	I1210 07:54:52.639140 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.639149 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:52.639155 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:52.639239 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:52.669083 1078428 cri.go:89] found id: ""
	I1210 07:54:52.669111 1078428 logs.go:282] 0 containers: []
	W1210 07:54:52.669120 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:52.669129 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:52.669141 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:52.684926 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:52.684953 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:52.749001 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:52.740050    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.740864    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.742595    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.743091    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:52.744746    7609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:52.749025 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:52.749037 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:52.773227 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:52.773261 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:52.804197 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:52.804276 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:54:52.054720 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:54.555065 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:54:55.368759 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:55.379351 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:55.379439 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:55.403912 1078428 cri.go:89] found id: ""
	I1210 07:54:55.403937 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.403946 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:55.403953 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:55.404021 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:55.432879 1078428 cri.go:89] found id: ""
	I1210 07:54:55.432902 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.432912 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:55.432918 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:55.432981 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:55.457499 1078428 cri.go:89] found id: ""
	I1210 07:54:55.457528 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.457537 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:55.457546 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:55.457605 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:55.482796 1078428 cri.go:89] found id: ""
	I1210 07:54:55.482824 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.482833 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:55.482840 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:55.482900 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:55.508135 1078428 cri.go:89] found id: ""
	I1210 07:54:55.508158 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.508167 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:55.508173 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:55.508239 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:55.532757 1078428 cri.go:89] found id: ""
	I1210 07:54:55.532828 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.532849 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:55.532856 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:55.532923 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:55.558383 1078428 cri.go:89] found id: ""
	I1210 07:54:55.558408 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.558431 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:55.558437 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:55.558540 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:55.584737 1078428 cri.go:89] found id: ""
	I1210 07:54:55.584768 1078428 logs.go:282] 0 containers: []
	W1210 07:54:55.584780 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:55.584790 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:55.584802 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:55.611899 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:55.611929 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:55.667940 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:55.667974 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:55.683872 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:55.683902 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:55.753488 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:55.745404    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.746023    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.747737    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.748225    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:55.749705    7732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:54:55.753511 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:55.753523 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
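When no containers are found, the fallback evidence comes from four sources, all visible in the "Gathering logs for ..." commands above: the kubelet and containerd journals, the kernel ring buffer, and the runtime's own container listing. Collected together for convenience (commands copied verbatim from the log):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a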
	I1210 07:54:58.279433 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:58.290275 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:58.290358 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:58.315732 1078428 cri.go:89] found id: ""
	I1210 07:54:58.315760 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.315769 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:58.315775 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:54:58.315840 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:58.354970 1078428 cri.go:89] found id: ""
	I1210 07:54:58.354993 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.355002 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:54:58.355009 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:54:58.355080 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:58.387261 1078428 cri.go:89] found id: ""
	I1210 07:54:58.387290 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.387300 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:54:58.387307 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:58.387366 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:58.415659 1078428 cri.go:89] found id: ""
	I1210 07:54:58.415683 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.415691 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:58.415698 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:58.415762 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:58.440257 1078428 cri.go:89] found id: ""
	I1210 07:54:58.440283 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.440292 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:58.440298 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:58.440380 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:58.465572 1078428 cri.go:89] found id: ""
	I1210 07:54:58.465598 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.465607 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:58.465614 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:58.465672 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:58.490288 1078428 cri.go:89] found id: ""
	I1210 07:54:58.490313 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.490321 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:58.490327 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:54:58.490384 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:54:58.516549 1078428 cri.go:89] found id: ""
	I1210 07:54:58.516572 1078428 logs.go:282] 0 containers: []
	W1210 07:54:58.516580 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:54:58.516590 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:54:58.516601 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:54:58.542195 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:54:58.542234 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:58.570592 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:58.570623 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:58.627983 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:58.628020 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:58.644192 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:58.644218 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:58.708892 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:58.700163    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.700595    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.702324    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.703126    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:54:58.704702    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 07:54:57.053952 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:54:59.054069 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:01.209184 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:01.221080 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:01.221155 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:01.250125 1078428 cri.go:89] found id: ""
	I1210 07:55:01.250154 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.250163 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:01.250178 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:01.250240 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:01.276827 1078428 cri.go:89] found id: ""
	I1210 07:55:01.276854 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.276869 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:01.276876 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:01.276938 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:01.311772 1078428 cri.go:89] found id: ""
	I1210 07:55:01.311808 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.311818 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:01.311824 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:01.311894 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:01.344006 1078428 cri.go:89] found id: ""
	I1210 07:55:01.344042 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.344052 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:01.344059 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:01.344131 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:01.370453 1078428 cri.go:89] found id: ""
	I1210 07:55:01.370508 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.370517 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:01.370524 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:01.370596 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:01.396784 1078428 cri.go:89] found id: ""
	I1210 07:55:01.396811 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.396833 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:01.396840 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:01.396925 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:01.427026 1078428 cri.go:89] found id: ""
	I1210 07:55:01.427053 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.427064 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:01.427076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:01.427145 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:01.453716 1078428 cri.go:89] found id: ""
	I1210 07:55:01.453745 1078428 logs.go:282] 0 containers: []
	W1210 07:55:01.453755 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:01.453765 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:01.453787 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:01.483021 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:01.483048 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:01.538363 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:01.538402 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:01.555879 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:01.555912 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:01.624093 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:01.614677    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.615511    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.617288    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.617899    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:01.619694    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:01.624120 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:01.624136 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:04.151461 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:04.161982 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:04.162052 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:04.187914 1078428 cri.go:89] found id: ""
	I1210 07:55:04.187940 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.187955 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:04.187961 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:04.188020 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:04.212016 1078428 cri.go:89] found id: ""
	I1210 07:55:04.212039 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.212048 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:04.212054 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:04.212113 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:04.237062 1078428 cri.go:89] found id: ""
	I1210 07:55:04.237088 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.237098 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:04.237107 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:04.237166 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:04.262844 1078428 cri.go:89] found id: ""
	I1210 07:55:04.262867 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.262876 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:04.262883 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:04.262943 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:04.288099 1078428 cri.go:89] found id: ""
	I1210 07:55:04.288125 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.288134 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:04.288140 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:04.288198 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:04.315819 1078428 cri.go:89] found id: ""
	I1210 07:55:04.315846 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.315855 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:04.315861 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:04.315923 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:04.349897 1078428 cri.go:89] found id: ""
	I1210 07:55:04.349919 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.349928 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:04.349934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:04.349992 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:04.374228 1078428 cri.go:89] found id: ""
	I1210 07:55:04.374255 1078428 logs.go:282] 0 containers: []
	W1210 07:55:04.374264 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:04.374274 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:04.374285 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:04.430541 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:04.430576 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:04.446913 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:04.446947 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:01.054690 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:03.054791 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:04.519646 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:04.510952    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.511715    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.513430    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.514116    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.515790    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:04.510952    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.511715    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.513430    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.514116    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:04.515790    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:04.519667 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:04.519679 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:04.545056 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:04.545097 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:07.074592 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:07.085572 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:07.085640 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:07.111394 1078428 cri.go:89] found id: ""
	I1210 07:55:07.111418 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.111426 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:07.111432 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:07.111497 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:07.135823 1078428 cri.go:89] found id: ""
	I1210 07:55:07.135848 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.135857 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:07.135864 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:07.135923 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:07.164275 1078428 cri.go:89] found id: ""
	I1210 07:55:07.164297 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.164306 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:07.164311 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:07.164385 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:07.193334 1078428 cri.go:89] found id: ""
	I1210 07:55:07.193358 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.193367 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:07.193373 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:07.193429 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:07.217929 1078428 cri.go:89] found id: ""
	I1210 07:55:07.217955 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.217964 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:07.217970 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:07.218032 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:07.243152 1078428 cri.go:89] found id: ""
	I1210 07:55:07.243176 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.243185 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:07.243191 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:07.243251 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:07.270888 1078428 cri.go:89] found id: ""
	I1210 07:55:07.270918 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.270927 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:07.270934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:07.270992 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:07.304504 1078428 cri.go:89] found id: ""
	I1210 07:55:07.304531 1078428 logs.go:282] 0 containers: []
	W1210 07:55:07.304540 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:07.304549 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:07.304561 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:07.370744 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:07.370786 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:07.386532 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:07.386606 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:07.450870 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:07.442507    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.443254    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.444829    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.445138    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.446858    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:07.442507    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.443254    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.444829    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.445138    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:07.446858    8169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:07.450892 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:07.450906 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:07.476441 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:07.476476 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:55:05.554590 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:08.054043 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:10.006374 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:10.031408 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:10.031500 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:10.072527 1078428 cri.go:89] found id: ""
	I1210 07:55:10.072558 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.072568 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:10.072575 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:10.072637 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:10.107560 1078428 cri.go:89] found id: ""
	I1210 07:55:10.107605 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.107615 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:10.107621 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:10.107694 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:10.138416 1078428 cri.go:89] found id: ""
	I1210 07:55:10.138441 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.138450 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:10.138456 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:10.138547 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:10.163271 1078428 cri.go:89] found id: ""
	I1210 07:55:10.163294 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.163303 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:10.163309 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:10.163372 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:10.193549 1078428 cri.go:89] found id: ""
	I1210 07:55:10.193625 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.193637 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:10.193664 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:10.193766 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:10.225083 1078428 cri.go:89] found id: ""
	I1210 07:55:10.225169 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.225182 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:10.225212 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:10.225307 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:10.251042 1078428 cri.go:89] found id: ""
	I1210 07:55:10.251067 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.251082 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:10.251089 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:10.251175 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:10.275656 1078428 cri.go:89] found id: ""
	I1210 07:55:10.275681 1078428 logs.go:282] 0 containers: []
	W1210 07:55:10.275690 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:10.275699 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:10.275711 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:10.335591 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:10.335628 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:10.352546 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:10.352577 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:10.421057 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:10.412822    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.413267    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.414660    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.415223    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.416854    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:10.412822    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.413267    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.414660    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.415223    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:10.416854    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:10.421081 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:10.421094 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:10.446445 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:10.446578 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:12.978285 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:12.988877 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:12.988951 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:13.014715 1078428 cri.go:89] found id: ""
	I1210 07:55:13.014738 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.014746 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:13.014753 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:13.014812 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:13.039187 1078428 cri.go:89] found id: ""
	I1210 07:55:13.039217 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.039226 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:13.039231 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:13.039293 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:13.079663 1078428 cri.go:89] found id: ""
	I1210 07:55:13.079687 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.079696 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:13.079702 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:13.079762 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:13.116097 1078428 cri.go:89] found id: ""
	I1210 07:55:13.116118 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.116127 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:13.116133 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:13.116190 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:13.141856 1078428 cri.go:89] found id: ""
	I1210 07:55:13.141921 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.141946 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:13.141973 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:13.142049 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:13.166245 1078428 cri.go:89] found id: ""
	I1210 07:55:13.166318 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.166341 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:13.166361 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:13.166452 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:13.190766 1078428 cri.go:89] found id: ""
	I1210 07:55:13.190790 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.190799 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:13.190805 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:13.190864 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:13.218179 1078428 cri.go:89] found id: ""
	I1210 07:55:13.218217 1078428 logs.go:282] 0 containers: []
	W1210 07:55:13.218227 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:13.218253 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:13.218270 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:13.234044 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:13.234082 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:13.303134 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:13.286450    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.287225    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.289050    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.289622    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.291338    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:13.286450    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.287225    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.289050    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.289622    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:13.291338    8384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:13.303158 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:13.303170 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:13.330980 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:13.331017 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:13.358836 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:13.358865 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:55:10.554264 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:13.054017 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:15.055138 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:15.922613 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:15.933295 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:15.933370 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:15.958341 1078428 cri.go:89] found id: ""
	I1210 07:55:15.958364 1078428 logs.go:282] 0 containers: []
	W1210 07:55:15.958373 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:15.958378 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:15.958434 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:15.983285 1078428 cri.go:89] found id: ""
	I1210 07:55:15.983309 1078428 logs.go:282] 0 containers: []
	W1210 07:55:15.983324 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:15.983330 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:15.983387 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:16.008789 1078428 cri.go:89] found id: ""
	I1210 07:55:16.008816 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.008825 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:16.008831 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:16.008926 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:16.035859 1078428 cri.go:89] found id: ""
	I1210 07:55:16.035931 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.035946 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:16.035955 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:16.036022 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:16.068655 1078428 cri.go:89] found id: ""
	I1210 07:55:16.068688 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.068697 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:16.068704 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:16.068776 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:16.106754 1078428 cri.go:89] found id: ""
	I1210 07:55:16.106780 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.106790 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:16.106796 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:16.106862 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:16.133097 1078428 cri.go:89] found id: ""
	I1210 07:55:16.133124 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.133133 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:16.133139 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:16.133207 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:16.157892 1078428 cri.go:89] found id: ""
	I1210 07:55:16.157938 1078428 logs.go:282] 0 containers: []
	W1210 07:55:16.157947 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:16.157957 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:16.157970 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:16.212808 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:16.212848 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:16.228781 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:16.228813 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:16.291789 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:16.283368    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.283909    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.285648    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.286193    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.287783    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:16.283368    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.283909    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.285648    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.286193    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:16.287783    8499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:16.291811 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:16.291823 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:16.319342 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:16.319380 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:18.855190 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:18.865732 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:18.865807 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:18.889830 1078428 cri.go:89] found id: ""
	I1210 07:55:18.889855 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.889864 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:18.889871 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:18.889936 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:18.914345 1078428 cri.go:89] found id: ""
	I1210 07:55:18.914370 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.914379 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:18.914385 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:18.914444 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:18.939221 1078428 cri.go:89] found id: ""
	I1210 07:55:18.939243 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.939253 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:18.939258 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:18.939316 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:18.967766 1078428 cri.go:89] found id: ""
	I1210 07:55:18.967788 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.967796 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:18.967803 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:18.967867 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:18.996962 1078428 cri.go:89] found id: ""
	I1210 07:55:18.996984 1078428 logs.go:282] 0 containers: []
	W1210 07:55:18.996992 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:18.996999 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:18.997055 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:19.023004 1078428 cri.go:89] found id: ""
	I1210 07:55:19.023031 1078428 logs.go:282] 0 containers: []
	W1210 07:55:19.023043 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:19.023052 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:19.023115 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:19.057510 1078428 cri.go:89] found id: ""
	I1210 07:55:19.057540 1078428 logs.go:282] 0 containers: []
	W1210 07:55:19.057549 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:19.057555 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:19.057611 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:19.092862 1078428 cri.go:89] found id: ""
	I1210 07:55:19.092891 1078428 logs.go:282] 0 containers: []
	W1210 07:55:19.092900 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:19.092910 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:19.092921 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:19.150597 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:19.150632 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:19.166174 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:19.166252 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:19.232235 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:19.224144    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.224636    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.226223    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.226815    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.228275    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:19.224144    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.224636    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.226223    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.226815    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:19.228275    8611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:19.232259 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:19.232272 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:19.256392 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:19.256424 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
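For reference, the probe-and-gather cycle that repeats above can be reproduced by hand on the node. The following is a minimal sketch only: the component names, paths, and diagnostic commands are taken verbatim from the log, but the loop structure and output handling are assumptions, not minikube's actual implementation.

	#!/bin/bash
	# Probe for a running apiserver process, then ask the CRI runtime for each
	# control-plane container; an empty ID list corresponds to the
	# "No container was found matching ..." warnings in the log above.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "no container matching \"$name\""
	done
	# Gather the same diagnostics each failed pass collects.
	sudo journalctl -u kubelet -n 400 > kubelet.log
	sudo journalctl -u containerd -n 400 > containerd.log
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig || echo "apiserver still unreachable"

As long as no kube-apiserver container exists, the describe step fails with the same "connection refused" stderr seen in every cycle, and the loop retries on roughly a three-second interval.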
	W1210 07:55:17.554658 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:20.054087 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:21.783358 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:21.793821 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:21.793896 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:21.818542 1078428 cri.go:89] found id: ""
	I1210 07:55:21.818564 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.818573 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:21.818580 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:21.818639 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:21.842392 1078428 cri.go:89] found id: ""
	I1210 07:55:21.842414 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.842423 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:21.842429 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:21.842509 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:21.869909 1078428 cri.go:89] found id: ""
	I1210 07:55:21.869931 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.869940 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:21.869947 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:21.870009 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:21.896175 1078428 cri.go:89] found id: ""
	I1210 07:55:21.896197 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.896206 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:21.896212 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:21.896272 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:21.924596 1078428 cri.go:89] found id: ""
	I1210 07:55:21.924672 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.924684 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:21.924691 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:21.924781 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:21.952789 1078428 cri.go:89] found id: ""
	I1210 07:55:21.952811 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.952820 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:21.952826 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:21.952885 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:21.978579 1078428 cri.go:89] found id: ""
	I1210 07:55:21.978603 1078428 logs.go:282] 0 containers: []
	W1210 07:55:21.978611 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:21.978617 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:21.978678 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:22.002801 1078428 cri.go:89] found id: ""
	I1210 07:55:22.002829 1078428 logs.go:282] 0 containers: []
	W1210 07:55:22.002838 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:22.002848 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:22.002866 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:22.021034 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:22.021067 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:22.101183 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:22.089820    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.090755    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.092439    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.092768    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.094252    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:22.089820    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.090755    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.092439    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.092768    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:22.094252    8722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:22.101208 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:22.101223 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:22.133557 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:22.133593 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:22.160692 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:22.160719 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:55:22.554004 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:25.054003 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:24.716616 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:24.727463 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:24.727545 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:24.752976 1078428 cri.go:89] found id: ""
	I1210 07:55:24.753005 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.753014 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:24.753021 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:24.753081 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:24.780812 1078428 cri.go:89] found id: ""
	I1210 07:55:24.780841 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.780850 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:24.780856 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:24.780913 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:24.806877 1078428 cri.go:89] found id: ""
	I1210 07:55:24.806900 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.806909 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:24.806915 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:24.806979 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:24.836752 1078428 cri.go:89] found id: ""
	I1210 07:55:24.836785 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.836795 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:24.836809 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:24.836876 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:24.863110 1078428 cri.go:89] found id: ""
	I1210 07:55:24.863134 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.863143 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:24.863153 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:24.863219 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:24.888190 1078428 cri.go:89] found id: ""
	I1210 07:55:24.888214 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.888223 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:24.888230 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:24.888289 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:24.912349 1078428 cri.go:89] found id: ""
	I1210 07:55:24.912383 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.912394 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:24.912400 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:24.912462 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:24.937752 1078428 cri.go:89] found id: ""
	I1210 07:55:24.937781 1078428 logs.go:282] 0 containers: []
	W1210 07:55:24.937790 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
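	The eight "listing CRI containers" probes above make up one pass of the harness's control-plane scan. The same scan can be reproduced inside the node with a short loop (component list copied from the log); an empty result for every name, as here, means containerd is up but no control-plane container was ever created:
	
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	        echo "== $c =="
	        sudo crictl ps -a --quiet --name="$c"
	    done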
	I1210 07:55:24.937799 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:24.937811 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:24.992892 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:24.992928 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:25.010173 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:25.010241 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:25.099629 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:25.089564    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.090718    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.091639    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.093411    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:25.093968    8837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:25.099713 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:25.099746 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:25.131383 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:25.131423 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
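	The "Gathering logs" steps wrap plain journalctl, dmesg, and crictl calls. For manual triage of a run like this, the same four sources can be pulled inside the node with the commands the harness itself runs (flags copied verbatim from the log):
	
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a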
	I1210 07:55:27.663351 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:27.674757 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:27.674843 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:27.704367 1078428 cri.go:89] found id: ""
	I1210 07:55:27.704400 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.704409 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:27.704420 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:27.704484 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:27.731740 1078428 cri.go:89] found id: ""
	I1210 07:55:27.731773 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.731783 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:27.731790 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:27.731852 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:27.761848 1078428 cri.go:89] found id: ""
	I1210 07:55:27.761871 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.761880 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:27.761886 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:27.761952 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:27.789498 1078428 cri.go:89] found id: ""
	I1210 07:55:27.789527 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.789537 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:27.789543 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:27.789603 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:27.815293 1078428 cri.go:89] found id: ""
	I1210 07:55:27.815320 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.815335 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:27.815342 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:27.815401 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:27.840211 1078428 cri.go:89] found id: ""
	I1210 07:55:27.840238 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.840249 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:27.840256 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:27.840320 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:27.866289 1078428 cri.go:89] found id: ""
	I1210 07:55:27.866313 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.866323 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:27.866329 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:27.866388 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:27.892533 1078428 cri.go:89] found id: ""
	I1210 07:55:27.892560 1078428 logs.go:282] 0 containers: []
	W1210 07:55:27.892569 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:27.892578 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:27.892590 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:27.952019 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:27.952063 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:27.969597 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:27.969631 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:28.035775 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:28.025972    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.026696    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.028884    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.029384    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:28.031424    8947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:28.035802 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:28.035816 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:28.064304 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:28.064344 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:55:27.054076 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:29.054524 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:30.599553 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:30.609953 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:30.610023 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:30.634355 1078428 cri.go:89] found id: ""
	I1210 07:55:30.634384 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.634393 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:30.634400 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:30.634460 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:30.658396 1078428 cri.go:89] found id: ""
	I1210 07:55:30.658435 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.658444 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:30.658450 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:30.658540 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:30.683976 1078428 cri.go:89] found id: ""
	I1210 07:55:30.684014 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.684023 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:30.684030 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:30.684099 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:30.708278 1078428 cri.go:89] found id: ""
	I1210 07:55:30.708302 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.708311 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:30.708317 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:30.708376 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:30.733222 1078428 cri.go:89] found id: ""
	I1210 07:55:30.733253 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.733262 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:30.733269 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:30.733368 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:30.758588 1078428 cri.go:89] found id: ""
	I1210 07:55:30.758614 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.758623 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:30.758630 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:30.758700 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:30.783735 1078428 cri.go:89] found id: ""
	I1210 07:55:30.783802 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.783826 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:30.783841 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:30.783910 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:30.807833 1078428 cri.go:89] found id: ""
	I1210 07:55:30.807859 1078428 logs.go:282] 0 containers: []
	W1210 07:55:30.807867 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:30.807876 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:30.807888 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:30.872941 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:30.864693    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.865280    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.867101    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.867488    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:30.869102    9052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:30.872961 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:30.872975 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:30.899140 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:30.899181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:30.926302 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:30.926333 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:30.982513 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:30.982550 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:33.499017 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:33.509596 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:33.509669 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:33.540057 1078428 cri.go:89] found id: ""
	I1210 07:55:33.540082 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.540090 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:33.540097 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:33.540160 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:33.570955 1078428 cri.go:89] found id: ""
	I1210 07:55:33.570982 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.570991 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:33.570997 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:33.571056 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:33.605930 1078428 cri.go:89] found id: ""
	I1210 07:55:33.605958 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.605968 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:33.605974 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:33.606036 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:33.634909 1078428 cri.go:89] found id: ""
	I1210 07:55:33.634932 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.634941 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:33.634947 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:33.635008 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:33.659844 1078428 cri.go:89] found id: ""
	I1210 07:55:33.659912 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.659927 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:33.659935 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:33.659999 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:33.684878 1078428 cri.go:89] found id: ""
	I1210 07:55:33.684902 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.684911 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:33.684918 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:33.684983 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:33.709473 1078428 cri.go:89] found id: ""
	I1210 07:55:33.709496 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.709505 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:33.709517 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:33.709580 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:33.736059 1078428 cri.go:89] found id: ""
	I1210 07:55:33.736086 1078428 logs.go:282] 0 containers: []
	W1210 07:55:33.736095 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:33.736105 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:33.736117 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:33.795512 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:33.795546 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:33.811254 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:33.811282 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:33.878126 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:33.869884    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.870553    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.872067    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.872608    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:33.874097    9170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:33.878148 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:33.878163 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:33.904005 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:33.904041 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:55:31.054696 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:33.054864 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:36.431681 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:36.442446 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:36.442546 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:36.466520 1078428 cri.go:89] found id: ""
	I1210 07:55:36.466544 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.466553 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:36.466559 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:36.466616 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:36.497280 1078428 cri.go:89] found id: ""
	I1210 07:55:36.497307 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.497316 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:36.497322 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:36.497382 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:36.526966 1078428 cri.go:89] found id: ""
	I1210 07:55:36.526988 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.526998 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:36.527003 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:36.527067 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:36.566317 1078428 cri.go:89] found id: ""
	I1210 07:55:36.566342 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.566351 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:36.566357 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:36.566432 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:36.598673 1078428 cri.go:89] found id: ""
	I1210 07:55:36.598699 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.598716 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:36.598722 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:36.598795 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:36.638514 1078428 cri.go:89] found id: ""
	I1210 07:55:36.638537 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.638545 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:36.638551 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:36.638621 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:36.663534 1078428 cri.go:89] found id: ""
	I1210 07:55:36.663603 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.663623 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:36.663630 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:36.663715 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:36.692427 1078428 cri.go:89] found id: ""
	I1210 07:55:36.692451 1078428 logs.go:282] 0 containers: []
	W1210 07:55:36.692461 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:36.692471 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:36.692482 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:36.717965 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:36.718003 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:36.749638 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:36.749668 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:36.806519 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:36.806562 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:36.823288 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:36.823315 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:36.888077 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:36.879018    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.879649    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.881436    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.881956    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:36.883715    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:39.389725 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:39.400775 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:39.400867 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:39.426362 1078428 cri.go:89] found id: ""
	I1210 07:55:39.426389 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.426398 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:39.426407 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:39.426555 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:39.455943 1078428 cri.go:89] found id: ""
	I1210 07:55:39.455969 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.455978 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:39.455984 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:39.456043 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:39.484097 1078428 cri.go:89] found id: ""
	I1210 07:55:39.484127 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.484142 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:39.484150 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:39.484209 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	W1210 07:55:35.554545 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:37.554652 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:40.054927 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:39.510381 1078428 cri.go:89] found id: ""
	I1210 07:55:39.510408 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.510417 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:39.510423 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:39.510508 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:39.534754 1078428 cri.go:89] found id: ""
	I1210 07:55:39.534819 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.534838 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:39.534845 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:39.534903 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:39.577369 1078428 cri.go:89] found id: ""
	I1210 07:55:39.577400 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.577409 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:39.577416 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:39.577519 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:39.607302 1078428 cri.go:89] found id: ""
	I1210 07:55:39.607329 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.607348 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:39.607355 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:39.607429 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:39.637231 1078428 cri.go:89] found id: ""
	I1210 07:55:39.637270 1078428 logs.go:282] 0 containers: []
	W1210 07:55:39.637282 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:39.637292 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:39.637305 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:39.694701 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:39.694745 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:39.711729 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:39.711761 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:39.777959 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:39.769450    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.770116    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.771675    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.771995    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:39.773551    9399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:39.777980 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:39.777995 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:39.802829 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:39.802869 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
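	Each diagnostic cycle opens with a pgrep probe for a live apiserver process: -f matches against the full command line, -x requires the whole command line to match the pattern exactly, and -n keeps only the newest match. Run standalone (pattern copied from the log, quoted for the shell):
	
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
	        || echo "no kube-apiserver process running"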
	I1210 07:55:42.336278 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:42.348869 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:42.348958 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:42.376684 1078428 cri.go:89] found id: ""
	I1210 07:55:42.376751 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.376766 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:42.376774 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:42.376834 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:42.401855 1078428 cri.go:89] found id: ""
	I1210 07:55:42.401881 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.401890 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:42.401897 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:42.401956 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:42.429508 1078428 cri.go:89] found id: ""
	I1210 07:55:42.429532 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.429541 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:42.429547 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:42.429605 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:42.453954 1078428 cri.go:89] found id: ""
	I1210 07:55:42.453978 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.453988 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:42.453994 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:42.454052 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:42.480307 1078428 cri.go:89] found id: ""
	I1210 07:55:42.480372 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.480386 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:42.480393 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:42.480465 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:42.505157 1078428 cri.go:89] found id: ""
	I1210 07:55:42.505189 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.505198 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:42.505205 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:42.505272 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:42.530482 1078428 cri.go:89] found id: ""
	I1210 07:55:42.530505 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.530513 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:42.530520 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:42.530580 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:42.563929 1078428 cri.go:89] found id: ""
	I1210 07:55:42.563996 1078428 logs.go:282] 0 containers: []
	W1210 07:55:42.564019 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:42.564041 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:42.564081 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:42.627607 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:42.627645 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:42.644032 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:42.644059 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:42.709684 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:42.701113    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.701752    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.703455    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.704045    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:42.705675    9516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:55:42.709704 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:42.709717 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:42.735150 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:42.735190 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:55:42.554153 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:44.554944 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:45.263314 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:45.276890 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:45.276965 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:45.320051 1078428 cri.go:89] found id: ""
	I1210 07:55:45.320079 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.320089 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:45.320096 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:45.320155 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:45.357108 1078428 cri.go:89] found id: ""
	I1210 07:55:45.357143 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.357153 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:45.357159 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:45.357235 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:45.386251 1078428 cri.go:89] found id: ""
	I1210 07:55:45.386281 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.386290 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:45.386296 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:45.386355 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:45.411934 1078428 cri.go:89] found id: ""
	I1210 07:55:45.411960 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.411969 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:45.411975 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:45.412034 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:45.438194 1078428 cri.go:89] found id: ""
	I1210 07:55:45.438221 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.438236 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:45.438242 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:45.438299 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:45.462840 1078428 cri.go:89] found id: ""
	I1210 07:55:45.462864 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.462874 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:45.462880 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:45.462938 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:45.487271 1078428 cri.go:89] found id: ""
	I1210 07:55:45.487296 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.487304 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:45.487311 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:45.487368 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:45.512829 1078428 cri.go:89] found id: ""
	I1210 07:55:45.512859 1078428 logs.go:282] 0 containers: []
	W1210 07:55:45.512868 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:45.512877 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:45.512888 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:45.592088 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:45.582808    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.583533    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.585213    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.585778    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.587458    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:45.582808    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.583533    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.585213    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.585778    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:45.587458    9616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:45.592106 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:45.592119 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:45.625233 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:45.625268 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:45.653443 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:45.653475 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:45.708240 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:45.708280 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:48.225757 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:48.236296 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:48.236369 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:48.261289 1078428 cri.go:89] found id: ""
	I1210 07:55:48.261312 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.261320 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:48.261337 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:48.261400 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:48.286722 1078428 cri.go:89] found id: ""
	I1210 07:55:48.286746 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.286755 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:48.286761 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:48.286819 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:48.322426 1078428 cri.go:89] found id: ""
	I1210 07:55:48.322453 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.322484 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:48.322507 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:48.322588 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:48.351023 1078428 cri.go:89] found id: ""
	I1210 07:55:48.351052 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.351062 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:48.351068 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:48.351126 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:48.378519 1078428 cri.go:89] found id: ""
	I1210 07:55:48.378542 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.378550 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:48.378556 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:48.378616 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:48.403355 1078428 cri.go:89] found id: ""
	I1210 07:55:48.403382 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.403392 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:48.403398 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:48.403478 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:48.427960 1078428 cri.go:89] found id: ""
	I1210 07:55:48.427986 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.427995 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:48.428001 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:48.428059 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:48.451603 1078428 cri.go:89] found id: ""
	I1210 07:55:48.451670 1078428 logs.go:282] 0 containers: []
	W1210 07:55:48.451696 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:48.451714 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:48.451727 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:48.506052 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:48.506088 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:48.523423 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:48.523453 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:48.594581 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:48.586197    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.587063    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.588701    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.589035    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.590570    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:48.586197    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.587063    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.588701    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.589035    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:48.590570    9734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:48.594606 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:48.594619 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:48.622945 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:48.622982 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:55:47.054022 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:49.054783 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:51.154448 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:51.165850 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:51.165926 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:51.191582 1078428 cri.go:89] found id: ""
	I1210 07:55:51.191607 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.191615 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:51.191622 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:51.191681 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:51.216289 1078428 cri.go:89] found id: ""
	I1210 07:55:51.216314 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.216324 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:51.216331 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:51.216390 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:51.245299 1078428 cri.go:89] found id: ""
	I1210 07:55:51.245324 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.245333 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:51.245339 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:51.245400 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:51.269348 1078428 cri.go:89] found id: ""
	I1210 07:55:51.269372 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.269380 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:51.269387 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:51.269443 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:51.296327 1078428 cri.go:89] found id: ""
	I1210 07:55:51.296350 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.296360 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:51.296367 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:51.296433 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:51.326976 1078428 cri.go:89] found id: ""
	I1210 07:55:51.326997 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.327005 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:51.327011 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:51.327069 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:51.360781 1078428 cri.go:89] found id: ""
	I1210 07:55:51.360857 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.360873 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:51.360881 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:51.360960 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:51.384754 1078428 cri.go:89] found id: ""
	I1210 07:55:51.384779 1078428 logs.go:282] 0 containers: []
	W1210 07:55:51.384788 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:51.384799 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:51.384810 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:51.443446 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:51.443483 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:51.461527 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:51.461559 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:51.529060 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:51.520063    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.520763    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.522338    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.522821    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.524380    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:51.520063    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.520763    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.522338    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.522821    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:51.524380    9845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:51.529096 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:51.529109 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:51.561037 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:51.561354 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:54.111711 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:54.122707 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:54.122781 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:54.152821 1078428 cri.go:89] found id: ""
	I1210 07:55:54.152853 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.152867 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:54.152878 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:54.152961 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:54.180559 1078428 cri.go:89] found id: ""
	I1210 07:55:54.180583 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.180591 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:54.180598 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:54.180662 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:54.208251 1078428 cri.go:89] found id: ""
	I1210 07:55:54.208276 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.208285 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:54.208292 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:54.208349 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:54.233630 1078428 cri.go:89] found id: ""
	I1210 07:55:54.233655 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.233664 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:54.233670 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:54.233727 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:54.258409 1078428 cri.go:89] found id: ""
	I1210 07:55:54.258435 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.258443 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:54.258450 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:54.258533 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:54.282200 1078428 cri.go:89] found id: ""
	I1210 07:55:54.282234 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.282242 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:54.282248 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:54.282306 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:54.326329 1078428 cri.go:89] found id: ""
	I1210 07:55:54.326352 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.326361 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:54.326367 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:54.326428 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:54.353371 1078428 cri.go:89] found id: ""
	I1210 07:55:54.353396 1078428 logs.go:282] 0 containers: []
	W1210 07:55:54.353405 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:54.353415 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:54.353429 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:55:54.412987 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:54.413025 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:54.429633 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:54.429718 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:51.553930 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:53.554866 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:54.497491 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:54.488603    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.489246    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.490208    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.491739    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.492335    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:54.488603    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.489246    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.490208    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.491739    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:54.492335    9958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:54.497530 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:54.497544 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:54.523210 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:54.523247 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:57.066626 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:57.077561 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:57.077642 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:55:57.102249 1078428 cri.go:89] found id: ""
	I1210 07:55:57.102273 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.102282 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:55:57.102289 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:55:57.102352 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:55:57.126387 1078428 cri.go:89] found id: ""
	I1210 07:55:57.126413 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.126421 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:55:57.126427 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:55:57.126506 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:55:57.151315 1078428 cri.go:89] found id: ""
	I1210 07:55:57.151341 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.151351 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:55:57.151357 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:55:57.151417 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:55:57.180045 1078428 cri.go:89] found id: ""
	I1210 07:55:57.180074 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.180083 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:55:57.180090 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:55:57.180150 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:55:57.205199 1078428 cri.go:89] found id: ""
	I1210 07:55:57.205225 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.205233 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:55:57.205240 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:55:57.205299 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:55:57.233971 1078428 cri.go:89] found id: ""
	I1210 07:55:57.233999 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.234009 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:55:57.234015 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:55:57.234078 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:55:57.258568 1078428 cri.go:89] found id: ""
	I1210 07:55:57.258594 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.258604 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:55:57.258610 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:55:57.258668 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:55:57.282764 1078428 cri.go:89] found id: ""
	I1210 07:55:57.282790 1078428 logs.go:282] 0 containers: []
	W1210 07:55:57.282800 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:55:57.282810 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:55:57.282823 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:55:57.299427 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:55:57.299453 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:55:57.374740 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:55:57.366367   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.367109   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.368634   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.369192   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.370847   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:55:57.366367   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.367109   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.368634   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.369192   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:55:57.370847   10072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:55:57.374810 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:55:57.374851 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:55:57.400786 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:55:57.400822 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:55:57.427735 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:55:57.427767 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:55:56.054043 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:55:58.054190 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:00.055015 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:55:59.984110 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:55:59.994599 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:55:59.994677 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:00.044693 1078428 cri.go:89] found id: ""
	I1210 07:56:00.044863 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.044893 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:00.044928 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:00.045024 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:00.118046 1078428 cri.go:89] found id: ""
	I1210 07:56:00.118124 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.118150 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:00.118171 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:00.119167 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:00.182111 1078428 cri.go:89] found id: ""
	I1210 07:56:00.182136 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.182145 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:00.182152 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:00.182960 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:00.239971 1078428 cri.go:89] found id: ""
	I1210 07:56:00.239996 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.240006 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:00.240013 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:00.240085 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:00.287888 1078428 cri.go:89] found id: ""
	I1210 07:56:00.287927 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.287937 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:00.287945 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:00.288014 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:00.352509 1078428 cri.go:89] found id: ""
	I1210 07:56:00.352556 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.352566 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:00.352593 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:00.352712 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:00.421383 1078428 cri.go:89] found id: ""
	I1210 07:56:00.421421 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.421430 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:00.421437 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:00.421521 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:00.456737 1078428 cri.go:89] found id: ""
	I1210 07:56:00.456766 1078428 logs.go:282] 0 containers: []
	W1210 07:56:00.456776 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:00.456786 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:00.456803 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:00.539348 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:00.530832   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.531677   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.533452   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.533939   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.535406   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:00.530832   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.531677   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.533452   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.533939   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:00.535406   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:00.539370 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:00.539385 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:00.569574 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:00.569616 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:00.613655 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:00.613680 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:00.671124 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:00.671163 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:03.187739 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:03.198133 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:03.198208 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:03.223791 1078428 cri.go:89] found id: ""
	I1210 07:56:03.223818 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.223828 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:03.223834 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:03.223894 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:03.248620 1078428 cri.go:89] found id: ""
	I1210 07:56:03.248644 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.248653 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:03.248659 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:03.248720 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:03.273951 1078428 cri.go:89] found id: ""
	I1210 07:56:03.273975 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.273985 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:03.273991 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:03.274053 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:03.300277 1078428 cri.go:89] found id: ""
	I1210 07:56:03.300300 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.300309 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:03.300315 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:03.300372 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:03.332941 1078428 cri.go:89] found id: ""
	I1210 07:56:03.332967 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.332977 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:03.332983 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:03.333038 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:03.367066 1078428 cri.go:89] found id: ""
	I1210 07:56:03.367091 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.367100 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:03.367106 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:03.367164 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:03.391075 1078428 cri.go:89] found id: ""
	I1210 07:56:03.391098 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.391106 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:03.391112 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:03.391170 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:03.415021 1078428 cri.go:89] found id: ""
	I1210 07:56:03.415049 1078428 logs.go:282] 0 containers: []
	W1210 07:56:03.415058 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:03.415068 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:03.415079 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:03.440424 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:03.440470 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:03.468290 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:03.468319 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:03.525567 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:03.525601 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:03.541470 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:03.541505 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:03.626098 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:03.618196   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.618603   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.620212   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.620606   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.622172   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:03.618196   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.618603   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.620212   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.620606   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:03.622172   10315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1210 07:56:02.554809 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:05.054059 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:06.126647 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:06.137759 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:06.137831 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:06.163154 1078428 cri.go:89] found id: ""
	I1210 07:56:06.163181 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.163191 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:06.163198 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:06.163265 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:06.192495 1078428 cri.go:89] found id: ""
	I1210 07:56:06.192521 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.192530 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:06.192536 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:06.192615 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:06.220976 1078428 cri.go:89] found id: ""
	I1210 07:56:06.221009 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.221017 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:06.221025 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:06.221134 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:06.246400 1078428 cri.go:89] found id: ""
	I1210 07:56:06.246427 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.246436 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:06.246442 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:06.246523 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:06.272644 1078428 cri.go:89] found id: ""
	I1210 07:56:06.272667 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.272675 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:06.272681 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:06.272738 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:06.300567 1078428 cri.go:89] found id: ""
	I1210 07:56:06.300636 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.300648 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:06.300655 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:06.300726 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:06.332683 1078428 cri.go:89] found id: ""
	I1210 07:56:06.332750 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.332773 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:06.332795 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:06.332881 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:06.366018 1078428 cri.go:89] found id: ""
	I1210 07:56:06.366099 1078428 logs.go:282] 0 containers: []
	W1210 07:56:06.366124 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:06.366149 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:06.366177 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:06.422922 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:06.422958 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:06.439199 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:06.439231 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:06.512644 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:06.503564   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.504265   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.506193   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.506871   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:06.508506   10415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:06.512669 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:06.512682 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:06.537590 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:06.537625 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:09.085608 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:09.095930 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:09.096006 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:09.119422 1078428 cri.go:89] found id: ""
	I1210 07:56:09.119445 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.119454 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:09.119460 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:09.119518 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:09.145193 1078428 cri.go:89] found id: ""
	I1210 07:56:09.145220 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.145230 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:09.145236 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:09.145296 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:09.170538 1078428 cri.go:89] found id: ""
	I1210 07:56:09.170567 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.170576 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:09.170582 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:09.170640 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:09.199713 1078428 cri.go:89] found id: ""
	I1210 07:56:09.199741 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.199749 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:09.199756 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:09.199815 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:09.224005 1078428 cri.go:89] found id: ""
	I1210 07:56:09.224037 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.224046 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:09.224053 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:09.224112 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:09.254251 1078428 cri.go:89] found id: ""
	I1210 07:56:09.254273 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.254283 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:09.254290 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:09.254348 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:09.280458 1078428 cri.go:89] found id: ""
	I1210 07:56:09.280484 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.280493 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:09.280500 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:09.280565 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:09.320912 1078428 cri.go:89] found id: ""
	I1210 07:56:09.320943 1078428 logs.go:282] 0 containers: []
	W1210 07:56:09.320952 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:09.320961 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:09.320974 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:09.386817 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:09.386854 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:09.402878 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:09.402954 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:09.472013 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:09.462698   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.463971   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.464603   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.466051   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:09.467835   10524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:09.472092 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:09.472114 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1210 07:56:07.054571 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:09.054701 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:09.497983 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:09.498020 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:12.030207 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:12.040966 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:12.041087 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:12.069314 1078428 cri.go:89] found id: ""
	I1210 07:56:12.069346 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.069356 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:12.069362 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:12.069424 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:12.096321 1078428 cri.go:89] found id: ""
	I1210 07:56:12.096400 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.096423 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:12.096438 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:12.096519 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:12.122859 1078428 cri.go:89] found id: ""
	I1210 07:56:12.122887 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.122896 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:12.122903 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:12.122985 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:12.148481 1078428 cri.go:89] found id: ""
	I1210 07:56:12.148505 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.148514 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:12.148520 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:12.148633 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:12.172954 1078428 cri.go:89] found id: ""
	I1210 07:56:12.172978 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.172995 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:12.173003 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:12.173063 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:12.198414 1078428 cri.go:89] found id: ""
	I1210 07:56:12.198436 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.198446 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:12.198453 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:12.198530 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:12.227549 1078428 cri.go:89] found id: ""
	I1210 07:56:12.227576 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.227586 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:12.227592 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:12.227651 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:12.255277 1078428 cri.go:89] found id: ""
	I1210 07:56:12.255300 1078428 logs.go:282] 0 containers: []
	W1210 07:56:12.255309 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:12.255318 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:12.255330 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:12.343072 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:12.327709   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.328182   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.329582   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.330282   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:12.331929   10630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:12.343095 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:12.343109 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:12.370845 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:12.370884 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:12.401190 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:12.401217 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:12.456146 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:12.456181 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:56:11.554344 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:13.554843 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:14.972152 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:14.983046 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:14.983121 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:15.031099 1078428 cri.go:89] found id: ""
	I1210 07:56:15.031183 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.031217 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:15.031260 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:15.031373 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:15.061619 1078428 cri.go:89] found id: ""
	I1210 07:56:15.061646 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.061655 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:15.061662 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:15.061728 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:15.088678 1078428 cri.go:89] found id: ""
	I1210 07:56:15.088701 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.088709 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:15.088716 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:15.088781 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:15.118776 1078428 cri.go:89] found id: ""
	I1210 07:56:15.118854 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.118872 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:15.118881 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:15.118945 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:15.144691 1078428 cri.go:89] found id: ""
	I1210 07:56:15.144717 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.144727 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:15.144734 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:15.144799 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:15.169827 1078428 cri.go:89] found id: ""
	I1210 07:56:15.169854 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.169863 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:15.169870 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:15.169927 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:15.196425 1078428 cri.go:89] found id: ""
	I1210 07:56:15.196459 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.196468 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:15.196474 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:15.196533 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:15.221736 1078428 cri.go:89] found id: ""
	I1210 07:56:15.221763 1078428 logs.go:282] 0 containers: []
	W1210 07:56:15.221772 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:15.221782 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:15.221794 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:15.237860 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:15.237890 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:15.309823 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:15.299280   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.302014   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.302821   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.303726   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:15.304513   10749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:15.309847 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:15.309860 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:15.342939 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:15.342990 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:15.376812 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:15.376839 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:17.934235 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:17.945317 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:17.945396 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:17.971659 1078428 cri.go:89] found id: ""
	I1210 07:56:17.971685 1078428 logs.go:282] 0 containers: []
	W1210 07:56:17.971694 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:17.971700 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:17.971753 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:17.996434 1078428 cri.go:89] found id: ""
	I1210 07:56:17.996476 1078428 logs.go:282] 0 containers: []
	W1210 07:56:17.996488 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:17.996495 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:17.996560 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:18.024303 1078428 cri.go:89] found id: ""
	I1210 07:56:18.024338 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.024347 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:18.024354 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:18.024416 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:18.049317 1078428 cri.go:89] found id: ""
	I1210 07:56:18.049344 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.049353 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:18.049360 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:18.049421 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:18.079586 1078428 cri.go:89] found id: ""
	I1210 07:56:18.079611 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.079620 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:18.079627 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:18.079686 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:18.108486 1078428 cri.go:89] found id: ""
	I1210 07:56:18.108511 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.108519 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:18.108526 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:18.108601 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:18.137645 1078428 cri.go:89] found id: ""
	I1210 07:56:18.137671 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.137680 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:18.137686 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:18.137767 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:18.161838 1078428 cri.go:89] found id: ""
	I1210 07:56:18.161863 1078428 logs.go:282] 0 containers: []
	W1210 07:56:18.161874 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:18.161883 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:18.161916 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:18.235505 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:18.227479   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.228109   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.229636   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.230246   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:18.231753   10857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:18.235526 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:18.235539 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:18.260551 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:18.260589 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:18.288267 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:18.288296 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:18.349132 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:18.349215 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:56:16.054030 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:18.054084 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:20.868569 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:20.879574 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:20.879649 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:20.904201 1078428 cri.go:89] found id: ""
	I1210 07:56:20.904226 1078428 logs.go:282] 0 containers: []
	W1210 07:56:20.904235 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:20.904241 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:20.904299 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:20.929396 1078428 cri.go:89] found id: ""
	I1210 07:56:20.929423 1078428 logs.go:282] 0 containers: []
	W1210 07:56:20.929432 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:20.929439 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:20.929514 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:20.954953 1078428 cri.go:89] found id: ""
	I1210 07:56:20.954984 1078428 logs.go:282] 0 containers: []
	W1210 07:56:20.954993 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:20.954999 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:20.955058 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:20.978741 1078428 cri.go:89] found id: ""
	I1210 07:56:20.978767 1078428 logs.go:282] 0 containers: []
	W1210 07:56:20.978776 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:20.978782 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:20.978841 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:21.003286 1078428 cri.go:89] found id: ""
	I1210 07:56:21.003313 1078428 logs.go:282] 0 containers: []
	W1210 07:56:21.003323 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:21.003330 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:21.003402 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:21.034505 1078428 cri.go:89] found id: ""
	I1210 07:56:21.034527 1078428 logs.go:282] 0 containers: []
	W1210 07:56:21.034536 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:21.034543 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:21.034605 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:21.058861 1078428 cri.go:89] found id: ""
	I1210 07:56:21.058885 1078428 logs.go:282] 0 containers: []
	W1210 07:56:21.058894 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:21.058900 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:21.058958 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:21.082740 1078428 cri.go:89] found id: ""
	I1210 07:56:21.082764 1078428 logs.go:282] 0 containers: []
	W1210 07:56:21.082773 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:21.082782 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:21.082794 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:21.098247 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:21.098276 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:21.161962 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:21.153624   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.154239   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.155892   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.156389   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:21.158100   10975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:21.161982 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:21.161995 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:21.187272 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:21.187314 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:21.214180 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:21.214213 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:23.769450 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:23.780372 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:23.780505 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:23.817607 1078428 cri.go:89] found id: ""
	I1210 07:56:23.817631 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.817641 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:23.817648 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:23.817709 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:23.848903 1078428 cri.go:89] found id: ""
	I1210 07:56:23.848927 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.848949 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:23.848960 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:23.849023 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:23.877281 1078428 cri.go:89] found id: ""
	I1210 07:56:23.877305 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.877314 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:23.877320 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:23.877387 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:23.903972 1078428 cri.go:89] found id: ""
	I1210 07:56:23.903997 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.904006 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:23.904013 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:23.904089 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:23.929481 1078428 cri.go:89] found id: ""
	I1210 07:56:23.929508 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.929517 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:23.929525 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:23.929586 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:23.954626 1078428 cri.go:89] found id: ""
	I1210 07:56:23.954665 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.954676 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:23.954683 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:23.954785 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:23.980069 1078428 cri.go:89] found id: ""
	I1210 07:56:23.980102 1078428 logs.go:282] 0 containers: []
	W1210 07:56:23.980111 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:23.980117 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:23.980176 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:24.005963 1078428 cri.go:89] found id: ""
	I1210 07:56:24.005987 1078428 logs.go:282] 0 containers: []
	W1210 07:56:24.005996 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:24.006006 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:24.006017 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:24.036028 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:24.036065 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:24.065541 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:24.065571 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:24.126584 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:24.126630 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:24.143358 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:24.143391 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:24.208974 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:24.200724   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.201305   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.202949   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.203979   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:24.204653   11104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1210 07:56:20.554242 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:22.554679 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:25.054999 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:26.710619 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:26.721267 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:26.721343 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:26.746073 1078428 cri.go:89] found id: ""
	I1210 07:56:26.746100 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.746109 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:26.746115 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:26.746178 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:26.772432 1078428 cri.go:89] found id: ""
	I1210 07:56:26.772456 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.772472 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:26.772479 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:26.772538 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:26.809928 1078428 cri.go:89] found id: ""
	I1210 07:56:26.809954 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.809964 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:26.809970 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:26.810026 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:26.837500 1078428 cri.go:89] found id: ""
	I1210 07:56:26.837522 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.837531 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:26.837538 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:26.837592 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:26.864667 1078428 cri.go:89] found id: ""
	I1210 07:56:26.864693 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.864702 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:26.864708 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:26.864768 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:26.892330 1078428 cri.go:89] found id: ""
	I1210 07:56:26.892359 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.892368 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:26.892374 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:26.892457 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:26.916781 1078428 cri.go:89] found id: ""
	I1210 07:56:26.916807 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.916815 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:26.916822 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:26.916902 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:26.945103 1078428 cri.go:89] found id: ""
	I1210 07:56:26.945128 1078428 logs.go:282] 0 containers: []
	W1210 07:56:26.945137 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:26.945147 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:26.945178 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:27.001893 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:27.001933 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:27.020119 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:27.020149 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:27.092626 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:27.084844   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.085466   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.086971   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.087472   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:27.088931   11204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:27.092690 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:27.092712 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:27.118838 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:27.118873 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
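
The cycle above repeats the same probe for every expected control-plane component: minikube asks crictl for container IDs matching a name filter and treats empty output as "no container was found". A minimal Go sketch of that probe pattern follows; it is an illustration of the command visible in the log, not minikube's actual cri.go, and it assumes crictl is on PATH and runnable via sudo.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // findContainers mirrors the probe in the log:
    //   sudo crictl ps -a --quiet --name=<component>
    // --quiet prints one container ID per line; empty output means no match.
    func findContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := findContainers(c)
    		if err != nil {
    			fmt.Printf("probe %q failed: %v\n", c, err)
    			continue
    		}
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", c)
    		} else {
    			fmt.Printf("%s: %v\n", c, ids)
    		}
    	}
    }
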
	W1210 07:56:27.554852 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:29.554968 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
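
The interleaved node_ready warnings come from a second process (pid 1077343) that is waiting for the no-preload-587009 apiserver on 192.168.85.2:8443. The sketch below shows only the retry-on-connection-refused shape of that poll; it assumes an unauthenticated client with certificate verification disabled, whereas the real client uses the profile's client certificates, so this is not minikube's node_ready.go.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	const url = "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009"
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Skip certificate verification only because this sketch has no CA bundle.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for attempt := 1; attempt <= 10; attempt++ {
    		resp, err := client.Get(url)
    		if err != nil {
    			// While the apiserver is down this is "connect: connection refused".
    			fmt.Printf("error getting node (will retry): %v\n", err)
    			time.Sleep(2500 * time.Millisecond)
    			continue
    		}
    		resp.Body.Close()
    		fmt.Println("apiserver reachable:", resp.Status)
    		return
    	}
    }
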
	I1210 07:56:29.646997 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:29.659058 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:29.659139 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:29.684417 1078428 cri.go:89] found id: ""
	I1210 07:56:29.684442 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.684452 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:29.684459 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:29.684532 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:29.713716 1078428 cri.go:89] found id: ""
	I1210 07:56:29.713747 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.713756 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:29.713762 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:29.713829 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:29.742671 1078428 cri.go:89] found id: ""
	I1210 07:56:29.742747 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.742761 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:29.742769 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:29.742834 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:29.767461 1078428 cri.go:89] found id: ""
	I1210 07:56:29.767488 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.767497 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:29.767503 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:29.767590 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:29.791629 1078428 cri.go:89] found id: ""
	I1210 07:56:29.791655 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.791664 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:29.791670 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:29.791728 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:29.822213 1078428 cri.go:89] found id: ""
	I1210 07:56:29.822240 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.822249 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:29.822255 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:29.822317 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:29.854606 1078428 cri.go:89] found id: ""
	I1210 07:56:29.854633 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.854643 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:29.854649 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:29.854709 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:29.880033 1078428 cri.go:89] found id: ""
	I1210 07:56:29.880059 1078428 logs.go:282] 0 containers: []
	W1210 07:56:29.880068 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:29.880077 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:29.880090 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:29.948475 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:29.940551   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.941416   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.942953   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.943415   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:29.944654   11311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:29.948498 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:29.948512 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:29.974136 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:29.974171 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:30.013967 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:30.014008 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:30.097748 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:30.097788 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
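
Each "Gathering logs for ..." step maps to exactly one shell command, visible in the ssh_runner.go lines above. The commands in the sketch below are copied from the log itself; the loop around them runs locally and is only an illustration of the sequence, since minikube executes these over SSH inside the node container.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Source name -> the exact command minikube runs for it (from the log).
    	steps := []struct{ name, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"containerd", "sudo journalctl -u containerd -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, s := range steps {
    		fmt.Println("Gathering logs for", s.name, "...")
    		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("  %s: %v\n", s.name, err)
    		}
    		fmt.Printf("  captured %d bytes\n", len(out))
    	}
    }
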
	I1210 07:56:32.617610 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:32.628661 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:32.628735 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:32.652564 1078428 cri.go:89] found id: ""
	I1210 07:56:32.652594 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.652603 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:32.652610 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:32.652668 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:32.680277 1078428 cri.go:89] found id: ""
	I1210 07:56:32.680302 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.680310 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:32.680317 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:32.680379 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:32.704183 1078428 cri.go:89] found id: ""
	I1210 07:56:32.704207 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.704216 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:32.704222 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:32.704285 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:32.729141 1078428 cri.go:89] found id: ""
	I1210 07:56:32.729165 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.729174 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:32.729180 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:32.729237 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:32.753460 1078428 cri.go:89] found id: ""
	I1210 07:56:32.753482 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.753490 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:32.753496 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:32.753562 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:32.781036 1078428 cri.go:89] found id: ""
	I1210 07:56:32.781061 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.781069 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:32.781076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:32.781131 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:32.816565 1078428 cri.go:89] found id: ""
	I1210 07:56:32.816586 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.816594 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:32.816599 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:32.816655 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:32.848807 1078428 cri.go:89] found id: ""
	I1210 07:56:32.848832 1078428 logs.go:282] 0 containers: []
	W1210 07:56:32.848841 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:32.848849 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:32.848861 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:32.908343 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:32.908379 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:32.924367 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:32.924396 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:32.994542 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:32.985949   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.986567   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.988414   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.988968   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:32.990531   11429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:32.994565 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:32.994581 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:33.024802 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:33.024842 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:56:32.054022 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:34.554950 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:35.557491 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:35.568723 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:35.568795 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:35.601157 1078428 cri.go:89] found id: ""
	I1210 07:56:35.601184 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.601193 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:35.601200 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:35.601260 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:35.628459 1078428 cri.go:89] found id: ""
	I1210 07:56:35.628494 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.628503 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:35.628509 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:35.628570 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:35.656310 1078428 cri.go:89] found id: ""
	I1210 07:56:35.656332 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.656342 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:35.656348 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:35.656404 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:35.680954 1078428 cri.go:89] found id: ""
	I1210 07:56:35.680980 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.680992 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:35.680998 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:35.681055 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:35.708548 1078428 cri.go:89] found id: ""
	I1210 07:56:35.708575 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.708584 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:35.708590 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:35.708648 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:35.736013 1078428 cri.go:89] found id: ""
	I1210 07:56:35.736040 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.736049 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:35.736056 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:35.736124 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:35.760465 1078428 cri.go:89] found id: ""
	I1210 07:56:35.760495 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.760504 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:35.760511 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:35.760574 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:35.785429 1078428 cri.go:89] found id: ""
	I1210 07:56:35.785451 1078428 logs.go:282] 0 containers: []
	W1210 07:56:35.785460 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:35.785469 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:35.785481 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:35.871280 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:35.862207   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.863090   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.864745   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.865288   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:35.866984   11531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:35.871302 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:35.871315 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:35.897087 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:35.897124 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:35.925107 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:35.925134 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:35.981188 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:35.981270 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:38.499048 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:38.509835 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:38.509908 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:38.534615 1078428 cri.go:89] found id: ""
	I1210 07:56:38.534637 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.534645 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:38.534652 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:38.534708 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:38.576309 1078428 cri.go:89] found id: ""
	I1210 07:56:38.576332 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.576341 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:38.576347 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:38.576407 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:38.611259 1078428 cri.go:89] found id: ""
	I1210 07:56:38.611281 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.611290 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:38.611297 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:38.611357 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:38.637583 1078428 cri.go:89] found id: ""
	I1210 07:56:38.637612 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.637621 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:38.637627 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:38.637686 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:38.662187 1078428 cri.go:89] found id: ""
	I1210 07:56:38.662267 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.662290 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:38.662310 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:38.662402 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:38.686838 1078428 cri.go:89] found id: ""
	I1210 07:56:38.686861 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.686869 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:38.686876 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:38.686933 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:38.710788 1078428 cri.go:89] found id: ""
	I1210 07:56:38.710815 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.710824 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:38.710831 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:38.710930 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:38.736531 1078428 cri.go:89] found id: ""
	I1210 07:56:38.736556 1078428 logs.go:282] 0 containers: []
	W1210 07:56:38.736565 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:38.736575 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:38.736589 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:38.752335 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:38.752364 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:38.826607 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:38.813602   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.818748   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.819180   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.820763   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:38.821332   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:38.826675 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:38.826688 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:38.854204 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:38.854240 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:38.883619 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:38.883647 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:56:37.054712 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:39.554110 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
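
Every cycle opens with the same liveness check seen on the next line: pgrep looks for a running kube-apiserver process before anything else, and a nonzero exit is what sends each cycle back into log gathering. A small, illustrative sketch of that check:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Mirrors: sudo pgrep -xnf kube-apiserver.*minikube.*
    	// -f matches against the full command line, -x requires the pattern to
    	// match it exactly, and -n keeps only the newest matching process.
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		// pgrep exits with status 1 when no process matches,
    		// which is the state throughout this log.
    		fmt.Println("kube-apiserver not running:", err)
    		return
    	}
    	fmt.Printf("kube-apiserver PID: %s", out)
    }
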
	I1210 07:56:41.439316 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:41.450451 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:41.450532 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:41.476998 1078428 cri.go:89] found id: ""
	I1210 07:56:41.477022 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.477030 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:41.477036 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:41.477096 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:41.502043 1078428 cri.go:89] found id: ""
	I1210 07:56:41.502069 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.502078 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:41.502084 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:41.502145 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:41.526905 1078428 cri.go:89] found id: ""
	I1210 07:56:41.526931 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.526940 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:41.526947 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:41.527007 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:41.558750 1078428 cri.go:89] found id: ""
	I1210 07:56:41.558779 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.558788 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:41.558795 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:41.558851 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:41.596637 1078428 cri.go:89] found id: ""
	I1210 07:56:41.596664 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.596674 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:41.596680 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:41.596742 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:41.622316 1078428 cri.go:89] found id: ""
	I1210 07:56:41.622340 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.622348 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:41.622355 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:41.622418 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:41.648410 1078428 cri.go:89] found id: ""
	I1210 07:56:41.648482 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.648511 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:41.648518 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:41.648581 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:41.680776 1078428 cri.go:89] found id: ""
	I1210 07:56:41.680802 1078428 logs.go:282] 0 containers: []
	W1210 07:56:41.680811 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:41.680820 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:41.680832 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:41.708185 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:41.708211 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:41.767625 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:41.767662 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:41.784949 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:41.784980 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:41.871610 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:41.863026   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.863723   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.865304   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.865872   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:41.867591   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:41.871632 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:41.871645 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:44.398611 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:44.408733 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:44.408806 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:44.432507 1078428 cri.go:89] found id: ""
	I1210 07:56:44.432531 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.432540 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:44.432546 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:44.432607 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:44.457597 1078428 cri.go:89] found id: ""
	I1210 07:56:44.457622 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.457631 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:44.457637 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:44.457697 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:44.485123 1078428 cri.go:89] found id: ""
	I1210 07:56:44.485149 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.485158 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:44.485165 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:44.485228 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	W1210 07:56:42.054022 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:44.054891 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:44.510813 1078428 cri.go:89] found id: ""
	I1210 07:56:44.510848 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.510857 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:44.510870 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:44.510929 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:44.534504 1078428 cri.go:89] found id: ""
	I1210 07:56:44.534528 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.534537 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:44.534543 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:44.534600 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:44.574866 1078428 cri.go:89] found id: ""
	I1210 07:56:44.574940 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.574962 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:44.574983 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:44.575074 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:44.605450 1078428 cri.go:89] found id: ""
	I1210 07:56:44.605523 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.605546 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:44.605566 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:44.605652 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:44.633965 1078428 cri.go:89] found id: ""
	I1210 07:56:44.634039 1078428 logs.go:282] 0 containers: []
	W1210 07:56:44.634064 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:44.634087 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:44.634124 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:44.692591 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:44.692628 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:44.708687 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:44.708718 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:44.774532 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:44.765883   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.766513   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.768058   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.769010   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:44.770731   11876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:44.774581 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:44.774594 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:44.801145 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:44.801235 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
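
The five identical memcache.go lines in each failed describe-nodes block are kubectl retrying API discovery before giving up, and "connect: connection refused" means the TCP dial to localhost:8443 fails instantly because nothing is listening there, rather than timing out. A small check demonstrating that distinction:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// With no apiserver bound to :8443 the kernel rejects the connection
    		// immediately, so this fails fast instead of exhausting the timeout.
    		fmt.Println("dial failed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on :8443")
    }
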
	I1210 07:56:47.336116 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:47.346722 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:47.346793 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:47.370822 1078428 cri.go:89] found id: ""
	I1210 07:56:47.370860 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.370870 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:47.370876 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:47.370948 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:47.401111 1078428 cri.go:89] found id: ""
	I1210 07:56:47.401140 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.401149 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:47.401155 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:47.401212 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:47.430968 1078428 cri.go:89] found id: ""
	I1210 07:56:47.430991 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.430999 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:47.431004 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:47.431063 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:47.455626 1078428 cri.go:89] found id: ""
	I1210 07:56:47.455650 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.455659 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:47.455665 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:47.455722 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:47.479857 1078428 cri.go:89] found id: ""
	I1210 07:56:47.479882 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.479890 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:47.479896 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:47.479959 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:47.504271 1078428 cri.go:89] found id: ""
	I1210 07:56:47.504294 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.504305 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:47.504312 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:47.504373 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:47.532761 1078428 cri.go:89] found id: ""
	I1210 07:56:47.532837 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.532863 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:47.532886 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:47.532990 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:47.570086 1078428 cri.go:89] found id: ""
	I1210 07:56:47.570108 1078428 logs.go:282] 0 containers: []
	W1210 07:56:47.570116 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:47.570125 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:47.570137 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:47.586049 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:47.586078 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:47.655434 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:47.647357   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.647927   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.649588   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.650042   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:47.651608   11989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:56:47.655455 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:47.655470 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:47.680757 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:47.680794 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:47.708957 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:47.708986 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:56:46.554013 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:49.054042 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:50.265598 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:50.276268 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:50.276342 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:50.301484 1078428 cri.go:89] found id: ""
	I1210 07:56:50.301507 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.301515 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:50.301521 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:50.301582 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:50.327230 1078428 cri.go:89] found id: ""
	I1210 07:56:50.327255 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.327264 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:50.327270 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:50.327331 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:50.352201 1078428 cri.go:89] found id: ""
	I1210 07:56:50.352224 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.352233 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:50.352239 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:50.352299 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:50.377546 1078428 cri.go:89] found id: ""
	I1210 07:56:50.377571 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.377580 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:50.377586 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:50.377647 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:50.403517 1078428 cri.go:89] found id: ""
	I1210 07:56:50.403544 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.403552 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:50.403559 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:50.403635 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:50.432794 1078428 cri.go:89] found id: ""
	I1210 07:56:50.432820 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.432829 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:50.432835 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:50.432924 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:50.456905 1078428 cri.go:89] found id: ""
	I1210 07:56:50.456931 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.456941 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:50.456947 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:50.457013 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:50.488840 1078428 cri.go:89] found id: ""
	I1210 07:56:50.488908 1078428 logs.go:282] 0 containers: []
	W1210 07:56:50.488932 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
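The listing sequence above repeats identically in every cycle: minikube asks crictl for containers matching each control-plane component name and gets an empty result each time. A minimal sketch of that enumeration, assuming crictl is installed and sudo-usable, with the component list and flags copied from the log (`ps -a --quiet --name=<component>`):

```go
// crilist.go - sketch of the per-component container enumeration seen
// in the cri.go lines above. Assumes crictl is on PATH and usable via
// sudo; components and flags are copied from the log, nothing else.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		// An empty list is what every cycle above reports:
		// `found id: ""` / `0 containers: []`.
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}
```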
	I1210 07:56:50.488949 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:50.488962 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:50.547966 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:50.548000 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:50.565711 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:50.565789 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:50.652776 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:50.644502   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.645087   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.646685   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.647075   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.648875   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:50.644502   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.645087   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.646685   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.647075   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:50.648875   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:50.652800 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:50.652815 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:50.678909 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:50.678950 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:53.207825 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:53.218403 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:53.218500 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:53.244529 1078428 cri.go:89] found id: ""
	I1210 07:56:53.244556 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.244565 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:53.244572 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:53.244629 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:53.270382 1078428 cri.go:89] found id: ""
	I1210 07:56:53.270408 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.270418 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:53.270424 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:53.270517 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:53.295316 1078428 cri.go:89] found id: ""
	I1210 07:56:53.295342 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.295352 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:53.295358 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:53.295425 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:53.324326 1078428 cri.go:89] found id: ""
	I1210 07:56:53.324351 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.324360 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:53.324367 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:53.324444 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:53.349399 1078428 cri.go:89] found id: ""
	I1210 07:56:53.349425 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.349435 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:53.349441 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:53.349555 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:53.374280 1078428 cri.go:89] found id: ""
	I1210 07:56:53.374305 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.374314 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:53.374321 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:53.374431 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:53.398894 1078428 cri.go:89] found id: ""
	I1210 07:56:53.398920 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.398929 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:53.398935 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:53.398992 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:53.423872 1078428 cri.go:89] found id: ""
	I1210 07:56:53.423897 1078428 logs.go:282] 0 containers: []
	W1210 07:56:53.423907 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:53.423920 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:53.423936 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:53.440226 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:53.440258 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:53.503949 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:53.495490   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.495917   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.497631   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.498044   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.499663   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:53.495490   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.495917   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.497631   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.498044   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:53.499663   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:53.503975 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:53.503989 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:53.530691 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:53.530737 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:53.577761 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:53.577835 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:56:51.054085 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:53.054150 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:56:56.142597 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
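Each cycle opens with the pgrep probe on the line above: `pgrep -xnf` matches against the full apiserver command line, and a non-zero exit simply means no such process exists yet, after which minikube falls back to the crictl enumeration. A hedged sketch of interpreting that exit status (the pattern is the one from the log; the helper name is ours, not minikube's):

```go
// apiserverAlive is an illustrative helper, not minikube's own code:
// it runs the same pgrep probe as the ssh_runner line above and maps
// the exit status to a boolean. pgrep exits 0 when a match is found
// and 1 when none is, which is not an execution error.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func apiserverAlive() (bool, error) {
	err := exec.Command("sudo", "pgrep", "-xnf",
		"kube-apiserver.*minikube.*").Run()
	if err == nil {
		return true, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return false, nil // no matching process; the case seen above
	}
	return false, err // pgrep itself failed to run
}

func main() {
	alive, err := apiserverAlive()
	fmt.Println("apiserver process:", alive, "err:", err)
}
```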
	I1210 07:56:56.153164 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:56.153234 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:56.177358 1078428 cri.go:89] found id: ""
	I1210 07:56:56.177391 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.177400 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:56.177406 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:56.177475 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:56.202573 1078428 cri.go:89] found id: ""
	I1210 07:56:56.202641 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.202657 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:56.202664 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:56.202725 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:56.226758 1078428 cri.go:89] found id: ""
	I1210 07:56:56.226785 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.226795 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:56.226802 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:56.226891 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:56.250286 1078428 cri.go:89] found id: ""
	I1210 07:56:56.250310 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.250319 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:56.250327 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:56.250381 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:56.276297 1078428 cri.go:89] found id: ""
	I1210 07:56:56.276375 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.276391 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:56.276398 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:56.276458 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:56.301334 1078428 cri.go:89] found id: ""
	I1210 07:56:56.301366 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.301375 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:56.301382 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:56.301450 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:56.325521 1078428 cri.go:89] found id: ""
	I1210 07:56:56.325557 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.325566 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:56.325572 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:56.325640 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:56.351180 1078428 cri.go:89] found id: ""
	I1210 07:56:56.351219 1078428 logs.go:282] 0 containers: []
	W1210 07:56:56.351228 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:56.351237 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:56.351249 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:56.406556 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:56.406592 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:56.422756 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:56.422788 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:56.486945 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:56.478739   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.479484   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.481033   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.481354   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.483059   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:56.478739   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.479484   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.481033   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.481354   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:56.483059   12330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
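The "describe nodes" step above uses the version-pinned kubectl binary that minikube ships inside the node, pointed at the node's own kubeconfig, and it fails for the same reason as every other API call here. A sketch of that invocation (both paths are copied verbatim from the log; this would run inside the minikube node, not on the host):

```go
// describenodes.go - sketch of the log-gathering step above: invoke
// the version-pinned kubectl with the in-node kubeconfig and capture
// combined output. Paths come verbatim from the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
	out, err := exec.Command("sudo", kubectl, "describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	if err != nil {
		// With no apiserver listening this yields "exit status 1" and
		// the stderr shown in the blocks above.
		fmt.Printf("describe nodes failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}
```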
	I1210 07:56:56.486967 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:56.486983 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:56.512575 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:56.512616 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:56:59.046618 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:56:59.059092 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:56:59.059161 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:56:59.089542 1078428 cri.go:89] found id: ""
	I1210 07:56:59.089571 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.089580 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:56:59.089586 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:56:59.089648 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:56:59.118669 1078428 cri.go:89] found id: ""
	I1210 07:56:59.118691 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.118700 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:56:59.118706 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:56:59.118770 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:56:59.143775 1078428 cri.go:89] found id: ""
	I1210 07:56:59.143802 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.143814 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:56:59.143821 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:56:59.143880 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:56:59.167972 1078428 cri.go:89] found id: ""
	I1210 07:56:59.167997 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.168006 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:56:59.168012 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:56:59.168088 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:56:59.195291 1078428 cri.go:89] found id: ""
	I1210 07:56:59.195316 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.195325 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:56:59.195331 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:56:59.195434 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:56:59.219900 1078428 cri.go:89] found id: ""
	I1210 07:56:59.219928 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.219937 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:56:59.219943 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:56:59.220002 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:56:59.252792 1078428 cri.go:89] found id: ""
	I1210 07:56:59.252818 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.252827 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:56:59.252834 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:56:59.252894 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:56:59.281785 1078428 cri.go:89] found id: ""
	I1210 07:56:59.281808 1078428 logs.go:282] 0 containers: []
	W1210 07:56:59.281823 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:56:59.281832 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:56:59.281843 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:56:59.337457 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:56:59.337496 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:56:59.353622 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:56:59.353650 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:56:59.423704 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:56:59.414855   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.415874   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.416954   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.418065   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.418769   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:56:59.414855   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.415874   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.416954   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.418065   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:56:59.418769   12442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:56:59.423725 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:56:59.423739 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:56:59.449814 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:56:59.449853 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:56:55.554362 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:57.554656 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:56:59.554765 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
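Interleaved with the 1078428 cycles, process 1077343 is a second test's node-readiness poll against no-preload-587009; it retries on a fixed interval and keeps logging the warning above until the apiserver at 192.168.85.2:8443 comes up. A generic sketch of such a poll loop (the probe function is a stand-in TCP check, not minikube's actual node_ready implementation):

```go
// pollready.go - illustrative retry loop in the spirit of the
// node_ready.go warnings above: probe on a fixed interval until the
// check succeeds or the deadline passes. The probe is a placeholder.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func pollReady(ctx context.Context, probe func() error, every time.Duration) error {
	t := time.NewTicker(every)
	defer t.Stop()
	for {
		if err := probe(); err == nil {
			return nil
		} else {
			fmt.Println("will retry:", err) // mirrors the W... lines above
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-t.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	probe := func() error {
		c, err := net.DialTimeout("tcp", "192.168.85.2:8443", 2*time.Second)
		if err != nil {
			return err
		}
		return c.Close()
	}
	fmt.Println("result:", pollReady(ctx, probe, 2500*time.Millisecond))
}
```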
	I1210 07:57:01.979246 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:01.990999 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:01.991072 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:02.022990 1078428 cri.go:89] found id: ""
	I1210 07:57:02.023028 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.023038 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:02.023046 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:02.023109 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:02.050830 1078428 cri.go:89] found id: ""
	I1210 07:57:02.050857 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.050867 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:02.050873 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:02.050930 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:02.080878 1078428 cri.go:89] found id: ""
	I1210 07:57:02.080901 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.080909 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:02.080915 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:02.080974 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:02.111744 1078428 cri.go:89] found id: ""
	I1210 07:57:02.111766 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.111774 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:02.111780 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:02.111838 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:02.139560 1078428 cri.go:89] found id: ""
	I1210 07:57:02.139587 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.139596 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:02.139602 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:02.139662 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:02.164249 1078428 cri.go:89] found id: ""
	I1210 07:57:02.164274 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.164282 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:02.164289 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:02.164347 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:02.191165 1078428 cri.go:89] found id: ""
	I1210 07:57:02.191187 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.191196 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:02.191202 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:02.191280 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:02.220305 1078428 cri.go:89] found id: ""
	I1210 07:57:02.220371 1078428 logs.go:282] 0 containers: []
	W1210 07:57:02.220395 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:02.220419 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:02.220447 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:02.275451 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:02.275490 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:02.291722 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:02.291797 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:02.357294 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:02.349371   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.349907   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.351488   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.351933   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.353434   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:02.349371   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.349907   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.351488   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.351933   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:02.353434   12554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:02.357319 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:02.357333 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:02.382557 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:02.382591 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:57:02.053955 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:57:04.553976 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:57:04.913285 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:04.924140 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:04.924214 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:04.949752 1078428 cri.go:89] found id: ""
	I1210 07:57:04.949787 1078428 logs.go:282] 0 containers: []
	W1210 07:57:04.949796 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:04.949803 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:04.949869 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:04.974850 1078428 cri.go:89] found id: ""
	I1210 07:57:04.974876 1078428 logs.go:282] 0 containers: []
	W1210 07:57:04.974886 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:04.974892 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:04.974949 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:04.999787 1078428 cri.go:89] found id: ""
	I1210 07:57:04.999853 1078428 logs.go:282] 0 containers: []
	W1210 07:57:04.999868 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:04.999875 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:04.999937 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:05.031544 1078428 cri.go:89] found id: ""
	I1210 07:57:05.031570 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.031580 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:05.031586 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:05.031644 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:05.068235 1078428 cri.go:89] found id: ""
	I1210 07:57:05.068262 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.068272 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:05.068278 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:05.068337 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:05.101435 1078428 cri.go:89] found id: ""
	I1210 07:57:05.101462 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.101472 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:05.101479 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:05.101545 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:05.129616 1078428 cri.go:89] found id: ""
	I1210 07:57:05.129640 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.129648 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:05.129654 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:05.129733 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:05.155520 1078428 cri.go:89] found id: ""
	I1210 07:57:05.155544 1078428 logs.go:282] 0 containers: []
	W1210 07:57:05.155553 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:05.155563 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:05.155575 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:05.212400 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:05.212436 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:05.228606 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:05.228643 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:05.292822 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:05.284723   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.285318   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.286836   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.287339   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.288802   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:05.284723   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.285318   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.286836   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.287339   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:05.288802   12665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:05.292845 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:05.292858 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:05.318694 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:05.318732 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:07.846610 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:07.857861 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:07.857939 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:07.885093 1078428 cri.go:89] found id: ""
	I1210 07:57:07.885115 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.885124 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:07.885130 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:07.885192 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:07.909018 1078428 cri.go:89] found id: ""
	I1210 07:57:07.909043 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.909052 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:07.909058 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:07.909116 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:07.935262 1078428 cri.go:89] found id: ""
	I1210 07:57:07.935288 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.935298 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:07.935303 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:07.935366 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:07.959939 1078428 cri.go:89] found id: ""
	I1210 07:57:07.959965 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.959974 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:07.959981 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:07.960039 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:07.991314 1078428 cri.go:89] found id: ""
	I1210 07:57:07.991341 1078428 logs.go:282] 0 containers: []
	W1210 07:57:07.991350 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:07.991356 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:07.991415 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:08.020601 1078428 cri.go:89] found id: ""
	I1210 07:57:08.020628 1078428 logs.go:282] 0 containers: []
	W1210 07:57:08.020638 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:08.020645 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:08.020709 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:08.049221 1078428 cri.go:89] found id: ""
	I1210 07:57:08.049250 1078428 logs.go:282] 0 containers: []
	W1210 07:57:08.049259 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:08.049265 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:08.049323 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:08.078839 1078428 cri.go:89] found id: ""
	I1210 07:57:08.078862 1078428 logs.go:282] 0 containers: []
	W1210 07:57:08.078870 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:08.078883 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:08.078896 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:08.098811 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:08.098888 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:08.168958 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:08.160514   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.160982   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.162642   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.163069   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.164788   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:08.160514   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.160982   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.162642   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.163069   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:08.164788   12776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:08.169024 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:08.169046 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:08.195261 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:08.195297 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:08.222093 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:08.222121 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
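All of the journal gathering in these cycles is the same command with two different units: the last 400 lines of the kubelet and containerd journals. A one-function sketch (unit names and the 400-line count are taken from the log; requires journalctl and sudo on the node):

```go
// journaltail.go - sketch of the journalctl gathering above: fetch
// the most recent n lines for a systemd unit. Unit names and the 400
// line count come from the log.
package main

import (
	"fmt"
	"os/exec"
)

func lastJournal(unit string, n int) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit,
		"-n", fmt.Sprint(n)).Output()
	return string(out), err
}

func main() {
	for _, unit := range []string{"kubelet", "containerd"} {
		text, err := lastJournal(unit, 400)
		if err != nil {
			fmt.Println(unit, "failed:", err)
			continue
		}
		fmt.Printf("=== %s ===\n%s", unit, text)
	}
}
```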
	W1210 07:57:06.554902 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:57:09.054181 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:57:10.778721 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:10.791524 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:10.791597 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:10.819485 1078428 cri.go:89] found id: ""
	I1210 07:57:10.819507 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.819519 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:10.819525 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:10.819585 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:10.872623 1078428 cri.go:89] found id: ""
	I1210 07:57:10.872646 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.872654 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:10.872660 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:10.872724 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:10.898357 1078428 cri.go:89] found id: ""
	I1210 07:57:10.898378 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.898387 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:10.898393 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:10.898448 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:10.923976 1078428 cri.go:89] found id: ""
	I1210 07:57:10.924000 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.924009 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:10.924016 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:10.924095 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:10.952951 1078428 cri.go:89] found id: ""
	I1210 07:57:10.952986 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.952996 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:10.953002 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:10.953069 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:10.977761 1078428 cri.go:89] found id: ""
	I1210 07:57:10.977793 1078428 logs.go:282] 0 containers: []
	W1210 07:57:10.977802 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:10.977808 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:10.977878 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:11.009022 1078428 cri.go:89] found id: ""
	I1210 07:57:11.009052 1078428 logs.go:282] 0 containers: []
	W1210 07:57:11.009069 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:11.009076 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:11.009147 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:11.034444 1078428 cri.go:89] found id: ""
	I1210 07:57:11.034493 1078428 logs.go:282] 0 containers: []
	W1210 07:57:11.034502 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:11.034512 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:11.034523 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:11.098059 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:11.098096 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:11.117339 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:11.117370 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:11.190897 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:11.182016   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.182889   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.184458   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.184955   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.186522   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:11.182016   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.182889   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.184458   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.184955   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:11.186522   12893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:11.190919 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:11.190932 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:11.215685 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:11.215722 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
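The loop above is minikube's log gatherer: for each control-plane component it lists matching CRI containers with crictl, finds none, and then falls back to the kubelet/dmesg/containerd journals. A minimal sketch of the same probe run by hand against the guest (profile name newest-cni-237317 taken from the sections further down; `minikube ssh` accepts a command to run inside the node):

    $ for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
        echo "== $c =="
        # same probe the gatherer issues inside the node
        minikube ssh -p newest-cni-237317 "sudo crictl ps -a --quiet --name=$c"
      done

An empty result for every name, as seen here, means no control-plane container was ever created, which points at the kubelet rather than at containerd.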
	I1210 07:57:13.744333 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:13.754962 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:13.755031 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:13.783588 1078428 cri.go:89] found id: ""
	I1210 07:57:13.783611 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.783619 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:13.783625 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:13.783683 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:13.819100 1078428 cri.go:89] found id: ""
	I1210 07:57:13.819122 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.819130 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:13.819136 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:13.819193 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:13.860234 1078428 cri.go:89] found id: ""
	I1210 07:57:13.860257 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.860266 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:13.860272 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:13.860332 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:13.886331 1078428 cri.go:89] found id: ""
	I1210 07:57:13.886406 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.886418 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:13.886424 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:13.886540 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:13.911054 1078428 cri.go:89] found id: ""
	I1210 07:57:13.911080 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.911089 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:13.911097 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:13.911172 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:13.934983 1078428 cri.go:89] found id: ""
	I1210 07:57:13.935051 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.935066 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:13.935073 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:13.935131 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:13.960415 1078428 cri.go:89] found id: ""
	I1210 07:57:13.960440 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.960449 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:13.960455 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:13.960538 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:13.985917 1078428 cri.go:89] found id: ""
	I1210 07:57:13.985964 1078428 logs.go:282] 0 containers: []
	W1210 07:57:13.985974 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:13.985983 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:13.985995 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:14.046091 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:14.046336 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:14.068485 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:14.068513 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:14.145212 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:14.136671   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.137530   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.139238   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.139533   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.141026   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:14.136671   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.137530   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.139238   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.139533   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:14.141026   13008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:14.145235 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:14.145248 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:14.170375 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:14.170409 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:57:11.553974 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:57:13.554028 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:57:15.554374 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:57:17.554945 1077343 node_ready.go:55] error getting node "no-preload-587009" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-587009": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:57:19.054633 1077343 node_ready.go:38] duration metric: took 6m0.001135979s for node "no-preload-587009" to be "Ready" ...
	I1210 07:57:19.057729 1077343 out.go:203] 
	W1210 07:57:19.060573 1077343 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 07:57:19.060592 1077343 out.go:285] * 
	W1210 07:57:19.062943 1077343 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:57:19.065570 1077343 out.go:203] 
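Interleaved with that, the no-preload-587009 run (log prefix 1077343) gives up here: 6m0s of polling https://192.168.85.2:8443 all ended in connection refused, so the start exits with GUEST_START. Two quick follow-ups from the host, both commands the report already uses elsewhere (minikube status further down, minikube logs in the error box above):

    $ out/minikube-linux-arm64 status -p no-preload-587009
    # an unreachable apiserver typically shows up here as "apiserver: Stopped"
    $ out/minikube-linux-arm64 logs -p no-preload-587009 --file=logs.txt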
	I1210 07:57:16.699528 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:16.710231 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:16.710301 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:16.734299 1078428 cri.go:89] found id: ""
	I1210 07:57:16.734325 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.734333 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:16.734339 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:16.734402 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:16.759890 1078428 cri.go:89] found id: ""
	I1210 07:57:16.759916 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.759925 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:16.759934 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:16.760017 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:16.788155 1078428 cri.go:89] found id: ""
	I1210 07:57:16.788181 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.788191 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:16.788197 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:16.788256 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:16.817801 1078428 cri.go:89] found id: ""
	I1210 07:57:16.817828 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.817837 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:16.817844 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:16.817904 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:16.845878 1078428 cri.go:89] found id: ""
	I1210 07:57:16.845905 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.845913 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:16.845919 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:16.845975 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:16.873613 1078428 cri.go:89] found id: ""
	I1210 07:57:16.873641 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.873651 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:16.873658 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:16.873719 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:16.898666 1078428 cri.go:89] found id: ""
	I1210 07:57:16.898689 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.898698 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:16.898704 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:16.898762 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:16.922533 1078428 cri.go:89] found id: ""
	I1210 07:57:16.922560 1078428 logs.go:282] 0 containers: []
	W1210 07:57:16.922569 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:16.922579 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:16.922591 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:16.948298 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:16.948341 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:16.976671 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:16.976699 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:17.033642 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:17.033681 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:17.052529 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:17.052568 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:17.131312 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:17.121533   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.122382   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.123692   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.124090   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.127051   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:17.121533   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.122382   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.123692   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.124090   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:17.127051   13134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:19.632225 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:19.644243 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:57:19.644343 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:57:19.682502 1078428 cri.go:89] found id: ""
	I1210 07:57:19.682536 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.682546 1078428 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:57:19.682553 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:57:19.682615 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:57:19.709431 1078428 cri.go:89] found id: ""
	I1210 07:57:19.709455 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.709464 1078428 logs.go:284] No container was found matching "etcd"
	I1210 07:57:19.709470 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:57:19.709532 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:57:19.739384 1078428 cri.go:89] found id: ""
	I1210 07:57:19.739426 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.739436 1078428 logs.go:284] No container was found matching "coredns"
	I1210 07:57:19.739442 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:57:19.739502 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:57:19.767244 1078428 cri.go:89] found id: ""
	I1210 07:57:19.767266 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.767274 1078428 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:57:19.767281 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:57:19.767338 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:57:19.802183 1078428 cri.go:89] found id: ""
	I1210 07:57:19.802207 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.802216 1078428 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:57:19.802222 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:57:19.802283 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:57:19.864351 1078428 cri.go:89] found id: ""
	I1210 07:57:19.864373 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.864381 1078428 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:57:19.864388 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:57:19.864446 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:57:19.923313 1078428 cri.go:89] found id: ""
	I1210 07:57:19.923336 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.923344 1078428 logs.go:284] No container was found matching "kindnet"
	I1210 07:57:19.923350 1078428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:57:19.923412 1078428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:57:19.956689 1078428 cri.go:89] found id: ""
	I1210 07:57:19.956768 1078428 logs.go:282] 0 containers: []
	W1210 07:57:19.956792 1078428 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:57:19.956836 1078428 logs.go:123] Gathering logs for kubelet ...
	I1210 07:57:19.956870 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:57:20.020110 1078428 logs.go:123] Gathering logs for dmesg ...
	I1210 07:57:20.020150 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:57:20.041105 1078428 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:57:20.041136 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:57:20.171782 1078428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:20.151749   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.157532   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.158302   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.161399   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.161675   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:57:20.151749   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.157532   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.158302   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.161399   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:20.161675   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:57:20.171803 1078428 logs.go:123] Gathering logs for containerd ...
	I1210 07:57:20.171817 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:57:20.212388 1078428 logs.go:123] Gathering logs for container status ...
	I1210 07:57:20.212467 1078428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:57:22.753904 1078428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:57:22.771857 1078428 out.go:203] 
	W1210 07:57:22.774733 1078428 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1210 07:57:22.774767 1078428 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1210 07:57:22.774778 1078428 out.go:285] * Related issues:
	W1210 07:57:22.774790 1078428 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1210 07:57:22.774803 1078428 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1210 07:57:22.777684 1078428 out.go:203] 
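The newest-cni run then fails on its own terms: after 6m0s the `pgrep -xnf kube-apiserver.*minikube.*` probe has never matched, so minikube exits with K8S_APISERVER_MISSING and suggests checking the apiserver flags and SELinux. A minimal manual version of that suggestion, assuming SSH access to the guest (getenforce may simply be absent on the Debian node image, hence the fallback):

    $ minikube ssh -p newest-cni-237317 "getenforce 2>/dev/null || echo 'SELinux tooling not present'"
    $ minikube ssh -p newest-cni-237317 "sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no apiserver process'"

Neither check explains this particular failure, though; the kubelet section below does.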
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780066864Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780147053Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780256331Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780332672Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780400546Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780472966Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780539559Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.780607409Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.781584850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.781686825Z" level=info msg="Connect containerd service"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.782018760Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.782725912Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.792587048Z" level=info msg="Start subscribing containerd event"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.792681047Z" level=info msg="Start recovering state"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.792879967Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.792982622Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.827708066Z" level=info msg="Start event monitor"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.827890403Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.827954912Z" level=info msg="Start streaming server"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.828030688Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.828089839Z" level=info msg="runtime interface starting up..."
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.828151371Z" level=info msg="starting plugins..."
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.828234219Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 07:51:20 newest-cni-237317 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 07:51:20 newest-cni-237317 containerd[554]: time="2025-12-10T07:51:20.830614962Z" level=info msg="containerd successfully booted in 0.079173s"
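One line worth flagging in the containerd startup above is the CNI load error: nothing was found in /etc/cni/net.d, so the CRI plugin boots without pod networking. For a newest-cni profile that is expected at this stage (the CNI config is written later), but it is easy to confirm by hand; the path is taken from the log line itself:

    $ minikube ssh -p newest-cni-237317 "ls -l /etc/cni/net.d/ 2>/dev/null || echo 'no CNI config yet'"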
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:57:38.485260   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:38.486157   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:38.487765   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:38.488080   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:57:38.489811   13913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:20] overlayfs: idmapped layers are currently not supported
	[  +2.735648] overlayfs: idmapped layers are currently not supported
	[Dec10 05:21] overlayfs: idmapped layers are currently not supported
	[ +24.110991] overlayfs: idmapped layers are currently not supported
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[ +24.761042] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:57:38 up  6:39,  0 user,  load average: 1.02, 0.73, 1.25
	Linux newest-cni-237317 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:57:35 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:57:35 newest-cni-237317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
	Dec 10 07:57:35 newest-cni-237317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:35 newest-cni-237317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:35 newest-cni-237317 kubelet[13774]: E1210 07:57:35.878410   13774 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:57:35 newest-cni-237317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:57:35 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:57:36 newest-cni-237317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
	Dec 10 07:57:36 newest-cni-237317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:36 newest-cni-237317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:36 newest-cni-237317 kubelet[13802]: E1210 07:57:36.607295   13802 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:57:36 newest-cni-237317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:57:36 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:57:37 newest-cni-237317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
	Dec 10 07:57:37 newest-cni-237317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:37 newest-cni-237317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:37 newest-cni-237317 kubelet[13815]: E1210 07:57:37.372465   13815 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:57:37 newest-cni-237317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:57:37 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:57:38 newest-cni-237317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
	Dec 10 07:57:38 newest-cni-237317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:38 newest-cni-237317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:57:38 newest-cni-237317 kubelet[13820]: E1210 07:57:38.105888   13820 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:57:38 newest-cni-237317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:57:38 newest-cni-237317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
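The kubelet section is the actual root cause: every systemd restart (the counter reaches 9) dies in config validation with "kubelet is configured to not run on a host using cgroup v1". That is consistent with the kernel section, which shows a 5.15.0-1084-aws kernel on an Ubuntu 20.04-era host, a combination that defaults to cgroup v1, so the v1.35.0-beta.0 kubelet refuses to start before it can ever create the control-plane containers. A standard probe (not from the report) for which hierarchy the host mounts:

    $ stat -fc %T /sys/fs/cgroup/
    # cgroup2fs -> cgroup v2 (what this kubelet requires)
    # tmpfs     -> cgroup v1 hybrid layout, matching the validation error above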
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-237317 -n newest-cni-237317
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-237317 -n newest-cni-237317: exit status 2 (349.528418ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-237317" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (11.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (270.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
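Each warning below is the helper retrying the same pod list against the dead apiserver. A manual equivalent of the poll, assuming minikube created a kubectl context named after the profile (its usual behavior), with the selector and namespace copied from the log:

    $ kubectl --context no-preload-587009 -n kubernetes-dashboard \
        get pods -l k8s-app=kubernetes-dashboard
    # with the apiserver down this fails the same way:
    #   The connection to the server 192.168.85.2:8443 was refused - did you specify the right host or port?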
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[the warning above repeats verbatim 16 more times]
E1210 08:06:43.005754  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/old-k8s-version-166796/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[the warning above repeats verbatim 4 more times]
E1210 08:06:47.802805  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/auto-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[the warning above repeats verbatim 13 more times]
E1210 08:07:02.171482  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:07:15.481886  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/calico-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:07:15.488386  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/calico-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:07:15.499791  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/calico-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:07:15.521336  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/calico-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:07:15.562792  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/calico-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:07:15.644175  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/calico-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:07:15.806383  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/calico-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:07:16.127939  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/calico-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:07:16.769632  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/calico-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:07:18.051011  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/calico-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:07:18.860466  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:07:20.613176  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/calico-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:07:25.734592  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/calico-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:07:35.783239  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:07:35.975892  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/calico-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1210 08:07:39.804488  786751 config.go:182] Loaded profile config "bridge-945825": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
E1210 08:07:56.457508  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/calico-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:24.093204  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:37.419054  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/calico-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:53.712159  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/custom-flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:53.718590  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/custom-flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:53.730053  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/custom-flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:53.751540  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/custom-flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:53.793044  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/custom-flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:53.874484  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/custom-flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:54.036228  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/custom-flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:54.358163  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/custom-flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:54.999573  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/custom-flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:56.281664  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/custom-flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:58.843718  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/custom-flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:09:03.941490  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/auto-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:09:03.964984  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/custom-flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:09:14.206731  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/custom-flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:09:16.546640  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/default-k8s-diff-port-444518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:09:34.688563  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/custom-flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:09:59.340698  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/calico-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:10:14.424735  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:10:15.650726  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/custom-flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:10:24.250996  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:10:40.233871  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/flannel-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:10:47.418745  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:10:47.425198  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:10:47.436644  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:10:47.458147  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:10:47.499654  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:10:47.581152  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:10:47.743424  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:10:48.065150  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:10:48.707337  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:10:49.989525  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 08:10:52.550922  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kindnet-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
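The WARNING run above is the test helper polling the kubernetes-dashboard namespace for pods matching the k8s-app=kubernetes-dashboard selector; every attempt dies at the TCP layer because nothing is listening on 192.168.85.2:8443. A minimal sketch of that poll-until-deadline pattern with client-go follows (a hypothetical standalone program, not the helpers_test.go code itself; the kubeconfig path and 2s retry interval are assumptions, while the 9m budget and label selector mirror the log):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig pointing at the profile under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same 9m0s budget the failing assertion used.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err != nil {
			// While the apiserver is down, this prints the same
			// "connection refused" WARNING seen in the log, then retries.
			fmt.Printf("WARNING: pod list returned: %v\n", err)
		} else if len(pods.Items) > 0 {
			fmt.Printf("found %d dashboard pod(s)\n", len(pods.Items))
			return
		}
		select {
		case <-ctx.Done():
			// The "failed to start within 9m0s: context deadline
			// exceeded" terminal state reported below.
			fmt.Println("gave up:", ctx.Err())
			return
		case <-time.After(2 * time.Second):
		}
	}
}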
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-587009 -n no-preload-587009
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-587009 -n no-preload-587009: exit status 2 (350.767875ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-587009" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-587009 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-587009 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.175µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-587009 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
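Note the 2.175µs duration on the failed kubectl describe above: by that point the test's deadline had already expired, so the child process is refused before it can do any work, which is why no deployment info was captured. A tiny sketch of that behavior (hypothetical, standard library only; an os/exec command started with an already-done context returns the context's error almost instantly):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// A context whose deadline has already passed, like the test's
	// 9-minute budget at the moment the follow-up kubectl commands ran.
	ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(-time.Second))
	defer cancel()

	start := time.Now()
	err := exec.CommandContext(ctx, "kubectl", "version", "--client").Run()
	// Fails after microseconds with context.DeadlineExceeded;
	// the child process never actually starts.
	fmt.Printf("err=%v after %s\n", err, time.Since(start))
}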
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-587009
helpers_test.go:244: (dbg) docker inspect no-preload-587009:

-- stdout --
	[
	    {
	        "Id": "59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf",
	        "Created": "2025-12-10T07:40:57.013300176Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1077472,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:51:10.781643992Z",
	            "FinishedAt": "2025-12-10T07:51:09.433560094Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/hostname",
	        "HostsPath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/hosts",
	        "LogPath": "/var/lib/docker/containers/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf/59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf-json.log",
	        "Name": "/no-preload-587009",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-587009:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-587009",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "59d9b9413fb32bafdbb8b551706cebeca25e5acc4119388a67aa105ab4b74edf",
	                "LowerDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d9586a7e7264b1f12de7086a58a5bd6ad7eded59233d4f726268049130993481/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-587009",
	                "Source": "/var/lib/docker/volumes/no-preload-587009/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-587009",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-587009",
	                "name.minikube.sigs.k8s.io": "no-preload-587009",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3027da22b232bea75e393d2b661101d643e6e04216f3ba2ece99c7a84ae4f2ee",
	            "SandboxKey": "/var/run/docker/netns/3027da22b232",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33840"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33841"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33844"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33842"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33843"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-587009": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:01:16:c7:75:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "93aee88a5c37e6ba01b74d7794f328193d01cfce9cb66379ab00b3f3e2e73a48",
	                    "EndpointID": "4717ce896d8375f79b53590f55b234cfc29918d126a12ae9fa574429e9722162",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-587009",
	                        "59d9b9413fb3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
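One useful datum in the inspect dump: the container's 8443/tcp apiserver port is published on 127.0.0.1:33843, which is the address host-side clients use with the docker driver. The mapping can be pulled out with a docker inspect Go template; a small sketch via os/exec (assumes the docker CLI is on PATH and the container still exists):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Extract the host port bound to the container's 8443/tcp
	// (the Kubernetes apiserver inside the kicbase container).
	format := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "inspect", "--format", format, "no-preload-587009").Output()
	if err != nil {
		panic(err)
	}
	// For the dump above this prints 33843.
	fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
}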
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-587009 -n no-preload-587009
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-587009 -n no-preload-587009: exit status 2 (298.356926ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-587009 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                            ARGS                                            │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-945825 sudo iptables -t nat -L -n -v                                 │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │ 10 Dec 25 08:09 UTC │
	│ ssh     │ -p enable-default-cni-945825 sudo systemctl status kubelet --all --full --no-pager         │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │ 10 Dec 25 08:09 UTC │
	│ ssh     │ -p enable-default-cni-945825 sudo systemctl cat kubelet --no-pager                         │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │ 10 Dec 25 08:09 UTC │
	│ ssh     │ -p enable-default-cni-945825 sudo journalctl -xeu kubelet --all --full --no-pager          │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │ 10 Dec 25 08:09 UTC │
	│ ssh     │ -p enable-default-cni-945825 sudo cat /etc/kubernetes/kubelet.conf                         │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │ 10 Dec 25 08:09 UTC │
	│ ssh     │ -p enable-default-cni-945825 sudo cat /var/lib/kubelet/config.yaml                         │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │ 10 Dec 25 08:09 UTC │
	│ ssh     │ -p enable-default-cni-945825 sudo systemctl status docker --all --full --no-pager          │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │                     │
	│ ssh     │ -p enable-default-cni-945825 sudo systemctl cat docker --no-pager                          │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │ 10 Dec 25 08:09 UTC │
	│ ssh     │ -p enable-default-cni-945825 sudo cat /etc/docker/daemon.json                              │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │                     │
	│ ssh     │ -p enable-default-cni-945825 sudo docker system info                                       │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │                     │
	│ ssh     │ -p enable-default-cni-945825 sudo systemctl status cri-docker --all --full --no-pager      │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │                     │
	│ ssh     │ -p enable-default-cni-945825 sudo systemctl cat cri-docker --no-pager                      │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │ 10 Dec 25 08:09 UTC │
	│ ssh     │ -p enable-default-cni-945825 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │                     │
	│ ssh     │ -p enable-default-cni-945825 sudo cat /usr/lib/systemd/system/cri-docker.service           │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │ 10 Dec 25 08:09 UTC │
	│ ssh     │ -p enable-default-cni-945825 sudo cri-dockerd --version                                    │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │ 10 Dec 25 08:09 UTC │
	│ ssh     │ -p enable-default-cni-945825 sudo systemctl status containerd --all --full --no-pager      │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │ 10 Dec 25 08:09 UTC │
	│ ssh     │ -p enable-default-cni-945825 sudo systemctl cat containerd --no-pager                      │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │ 10 Dec 25 08:09 UTC │
	│ ssh     │ -p enable-default-cni-945825 sudo cat /lib/systemd/system/containerd.service               │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │ 10 Dec 25 08:09 UTC │
	│ ssh     │ -p enable-default-cni-945825 sudo cat /etc/containerd/config.toml                          │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │ 10 Dec 25 08:09 UTC │
	│ ssh     │ -p enable-default-cni-945825 sudo containerd config dump                                   │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │ 10 Dec 25 08:09 UTC │
	│ ssh     │ -p enable-default-cni-945825 sudo systemctl status crio --all --full --no-pager            │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │                     │
	│ ssh     │ -p enable-default-cni-945825 sudo systemctl cat crio --no-pager                            │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │ 10 Dec 25 08:09 UTC │
	│ ssh     │ -p enable-default-cni-945825 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │ 10 Dec 25 08:09 UTC │
	│ ssh     │ -p enable-default-cni-945825 sudo crio config                                              │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │ 10 Dec 25 08:09 UTC │
	│ delete  │ -p enable-default-cni-945825                                                               │ enable-default-cni-945825 │ jenkins │ v1.37.0 │ 10 Dec 25 08:09 UTC │ 10 Dec 25 08:09 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
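The table above is minikube's audit of every command the harness ran against this profile; the closing sweep of ssh rows gathers runtime diagnostics (docker, cri-docker, containerd, crio) right before delete -p tears the profile down. Any row can be replayed by hand; a minimal Go sketch, assuming a local minikube binary on PATH, with the profile name and command copied from the table (illustrative only, not part of the test harness):

	// replay_diag.go: re-run one of the audited SSH diagnostics by hand.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors the table row:
		//   ssh -p enable-default-cni-945825 sudo systemctl cat containerd --no-pager
		out, err := exec.Command("minikube", "ssh", "-p", "enable-default-cni-945825",
			"--", "sudo", "systemctl", "cat", "containerd", "--no-pager").CombinedOutput()
		if err != nil {
			fmt.Println("diagnostic failed:", err)
		}
		fmt.Print(string(out))
	}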
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 08:08:11
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
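Every entry below carries the klog header described in the format line above: severity, date, timestamp, thread id, source file:line, then the message. A small Go sketch that splits such a line into those fields; the regular expression is written against this documented format, not taken from minikube:

	// parse_klog.go: split one klog-style line per the documented header format.
	package main

	import (
		"fmt"
		"regexp"
	)

	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./_-]+:\d+)\] (.*)$`)

	func main() {
		line := `I1210 08:08:11.081160 1140907 out.go:360] Setting OutFile to fd 1 ...`
		m := klogRe.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("no match")
			return
		}
		fmt.Printf("level=%s date=%s time=%s tid=%s src=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}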
	I1210 08:08:11.081160 1140907 out.go:360] Setting OutFile to fd 1 ...
	I1210 08:08:11.081397 1140907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:08:11.081426 1140907 out.go:374] Setting ErrFile to fd 2...
	I1210 08:08:11.081446 1140907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:08:11.081766 1140907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 08:08:11.082300 1140907 out.go:368] Setting JSON to false
	I1210 08:08:11.083370 1140907 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":24615,"bootTime":1765329476,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 08:08:11.083484 1140907 start.go:143] virtualization:  
	I1210 08:08:11.087410 1140907 out.go:179] * [enable-default-cni-945825] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 08:08:11.092150 1140907 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 08:08:11.092194 1140907 notify.go:221] Checking for updates...
	I1210 08:08:11.099881 1140907 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 08:08:11.103226 1140907 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 08:08:11.106443 1140907 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 08:08:11.109602 1140907 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 08:08:11.112568 1140907 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 08:08:11.116425 1140907 config.go:182] Loaded profile config "no-preload-587009": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 08:08:11.116555 1140907 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 08:08:11.154451 1140907 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 08:08:11.154637 1140907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 08:08:11.213442 1140907 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 08:08:11.203140231 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
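The docker info: dump above is minikube's parsed view of docker system info --format "{{json .}}" from the preceding Run line. A trimmed Go sketch of the same decode, keeping only a few of the fields visible in the dump; the struct here is illustrative, not minikube's actual info type:

	// docker_info.go: decode a handful of fields from `docker system info --format "{{json .}}"`.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type dockerInfo struct {
		NCPU            int    `json:"NCPU"`
		MemTotal        int64  `json:"MemTotal"`
		CgroupDriver    string `json:"CgroupDriver"`
		OperatingSystem string `json:"OperatingSystem"`
		ServerVersion   string `json:"ServerVersion"`
	}

	func main() {
		raw, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(raw, &info); err != nil {
			panic(err)
		}
		fmt.Printf("%+v\n", info)
	}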
	I1210 08:08:11.213559 1140907 docker.go:319] overlay module found
	I1210 08:08:11.216824 1140907 out.go:179] * Using the docker driver based on user configuration
	I1210 08:08:11.219854 1140907 start.go:309] selected driver: docker
	I1210 08:08:11.219879 1140907 start.go:927] validating driver "docker" against <nil>
	I1210 08:08:11.219907 1140907 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 08:08:11.220659 1140907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 08:08:11.273433 1140907 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 08:08:11.264397327 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 08:08:11.273600 1140907 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	E1210 08:08:11.273812 1140907 start_flags.go:481] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1210 08:08:11.273842 1140907 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 08:08:11.276874 1140907 out.go:179] * Using Docker driver with root privileges
	I1210 08:08:11.279841 1140907 cni.go:84] Creating CNI manager for "bridge"
	I1210 08:08:11.279872 1140907 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 08:08:11.279955 1140907 start.go:353] cluster config:
	{Name:enable-default-cni-945825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-945825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 08:08:11.283127 1140907 out.go:179] * Starting "enable-default-cni-945825" primary control-plane node in "enable-default-cni-945825" cluster
	I1210 08:08:11.286105 1140907 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 08:08:11.289063 1140907 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 08:08:11.292115 1140907 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1210 08:08:11.292171 1140907 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1210 08:08:11.292187 1140907 cache.go:65] Caching tarball of preloaded images
	I1210 08:08:11.292219 1140907 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 08:08:11.292276 1140907 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1210 08:08:11.292287 1140907 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1210 08:08:11.292411 1140907 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/config.json ...
	I1210 08:08:11.292430 1140907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/config.json: {Name:mk2ea24b987c51edbba242758bb3419e4264489e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
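The cluster config printed above is persisted to profiles/enable-default-cni-945825/config.json, with the write serialized behind the named lock in the lock.go line. A stripped-down Go sketch of that persistence step; the plain file write standing in for the lock, the temp-dir path, and the five-field struct are all simplifications of mine:

	// save_profile.go: persist a small slice of the cluster config as JSON.
	package main

	import (
		"encoding/json"
		"os"
		"path/filepath"
	)

	type clusterConfig struct {
		Name              string `json:"Name"`
		Driver            string `json:"Driver"`
		ContainerRuntime  string `json:"ContainerRuntime"`
		KubernetesVersion string `json:"KubernetesVersion"`
		CNI               string `json:"CNI"`
	}

	func main() {
		cfg := clusterConfig{
			Name:              "enable-default-cni-945825",
			Driver:            "docker",
			ContainerRuntime:  "containerd",
			KubernetesVersion: "v1.34.2",
			CNI:               "bridge",
		}
		out, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		// Real minikube writes under $MINIKUBE_HOME/profiles/<name>/ behind a lock.
		dir := filepath.Join(os.TempDir(), "profiles", cfg.Name)
		if err := os.MkdirAll(dir, 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile(filepath.Join(dir, "config.json"), out, 0o644); err != nil {
			panic(err)
		}
	}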
	I1210 08:08:11.312709 1140907 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 08:08:11.312737 1140907 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 08:08:11.312764 1140907 cache.go:243] Successfully downloaded all kic artifacts
	I1210 08:08:11.312800 1140907 start.go:360] acquireMachinesLock for enable-default-cni-945825: {Name:mk2457ee8c30e8975bbb881578e5d30cd9ff4b43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 08:08:11.312952 1140907 start.go:364] duration metric: took 130.609µs to acquireMachinesLock for "enable-default-cni-945825"
	I1210 08:08:11.312993 1140907 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-945825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-945825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 08:08:11.313073 1140907 start.go:125] createHost starting for "" (driver="docker")
	I1210 08:08:11.316543 1140907 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 08:08:11.316801 1140907 start.go:159] libmachine.API.Create for "enable-default-cni-945825" (driver="docker")
	I1210 08:08:11.316842 1140907 client.go:173] LocalClient.Create starting
	I1210 08:08:11.316938 1140907 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem
	I1210 08:08:11.316975 1140907 main.go:143] libmachine: Decoding PEM data...
	I1210 08:08:11.316998 1140907 main.go:143] libmachine: Parsing certificate...
	I1210 08:08:11.317068 1140907 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem
	I1210 08:08:11.317127 1140907 main.go:143] libmachine: Decoding PEM data...
	I1210 08:08:11.317141 1140907 main.go:143] libmachine: Parsing certificate...
	I1210 08:08:11.317509 1140907 cli_runner.go:164] Run: docker network inspect enable-default-cni-945825 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 08:08:11.333820 1140907 cli_runner.go:211] docker network inspect enable-default-cni-945825 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 08:08:11.333942 1140907 network_create.go:284] running [docker network inspect enable-default-cni-945825] to gather additional debugging logs...
	I1210 08:08:11.333970 1140907 cli_runner.go:164] Run: docker network inspect enable-default-cni-945825
	W1210 08:08:11.350922 1140907 cli_runner.go:211] docker network inspect enable-default-cni-945825 returned with exit code 1
	I1210 08:08:11.350955 1140907 network_create.go:287] error running [docker network inspect enable-default-cni-945825]: docker network inspect enable-default-cni-945825: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network enable-default-cni-945825 not found
	I1210 08:08:11.350983 1140907 network_create.go:289] output of [docker network inspect enable-default-cni-945825]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network enable-default-cni-945825 not found
	
	** /stderr **
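When docker network inspect fails, minikube re-runs it and logs stdout and stderr separately, which is what produces the -- stdout -- / ** stderr ** blocks above. A Go sketch of that capture pattern (illustrative, not minikube's cli_runner):

	// capture.go: run a command and keep stdout and stderr apart, as the log does.
	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("docker", "network", "inspect", "enable-default-cni-945825")
		var stdout, stderr bytes.Buffer
		cmd.Stdout, cmd.Stderr = &stdout, &stderr
		err := cmd.Run()
		fmt.Printf("err: %v\n-- stdout --\n%s\n** stderr **\n%s",
			err, stdout.String(), stderr.String())
	}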
	I1210 08:08:11.351083 1140907 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 08:08:11.367601 1140907 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7092cc4ae12c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:9b:65:77:38:2f} reservation:<nil>}
	I1210 08:08:11.367949 1140907 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-948cd8ab8a49 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6e:79:0a:43:2f:62} reservation:<nil>}
	I1210 08:08:11.368300 1140907 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-21ed51b7c74f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:35:c0:64:58:42} reservation:<nil>}
	I1210 08:08:11.368752 1140907 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a1e290}
	I1210 08:08:11.368776 1140907 network_create.go:124] attempt to create docker network enable-default-cni-945825 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 08:08:11.368839 1140907 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-945825 enable-default-cni-945825
	I1210 08:08:11.426829 1140907 network_create.go:108] docker network enable-default-cni-945825 192.168.76.0/24 created
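The three skipped subnets and the final pick of 192.168.76.0/24 trace a walk over candidate /24 networks whose third octet steps by 9 (49, 58, 67, 76). A Go sketch of that walk as inferred from these four log lines; the step size and the hard-coded taken set are read off the log, not taken from minikube's source:

	// subnet_walk.go: pick the first free 192.168.x.0/24 the way the log suggests.
	package main

	import "fmt"

	func main() {
		// Subnets already owned by other minikube networks, per the log.
		taken := map[int]bool{49: true, 58: true, 67: true}
		for octet := 49; octet <= 255; octet += 9 {
			if taken[octet] {
				fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
				continue
			}
			fmt.Printf("using free private subnet 192.168.%d.0/24 (gateway 192.168.%d.1)\n", octet, octet)
			return
		}
	}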
	I1210 08:08:11.426865 1140907 kic.go:121] calculated static IP "192.168.76.2" for the "enable-default-cni-945825" container
	I1210 08:08:11.426949 1140907 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 08:08:11.443340 1140907 cli_runner.go:164] Run: docker volume create enable-default-cni-945825 --label name.minikube.sigs.k8s.io=enable-default-cni-945825 --label created_by.minikube.sigs.k8s.io=true
	I1210 08:08:11.461202 1140907 oci.go:103] Successfully created a docker volume enable-default-cni-945825
	I1210 08:08:11.461295 1140907 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-945825-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-945825 --entrypoint /usr/bin/test -v enable-default-cni-945825:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
	I1210 08:08:11.983050 1140907 oci.go:107] Successfully prepared a docker volume enable-default-cni-945825
	I1210 08:08:11.983119 1140907 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1210 08:08:11.983129 1140907 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 08:08:11.983197 1140907 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-945825:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 08:08:16.011336 1140907 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-945825:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir: (4.028098488s)
	I1210 08:08:16.011376 1140907 kic.go:203] duration metric: took 4.028242111s to extract preloaded images to volume ...
	W1210 08:08:16.011540 1140907 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 08:08:16.011657 1140907 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 08:08:16.078109 1140907 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-945825 --name enable-default-cni-945825 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-945825 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-945825 --network enable-default-cni-945825 --ip 192.168.76.2 --volume enable-default-cni-945825:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
	I1210 08:08:16.415520 1140907 cli_runner.go:164] Run: docker container inspect enable-default-cni-945825 --format={{.State.Running}}
	I1210 08:08:16.439530 1140907 cli_runner.go:164] Run: docker container inspect enable-default-cni-945825 --format={{.State.Status}}
	I1210 08:08:16.461394 1140907 cli_runner.go:164] Run: docker exec enable-default-cni-945825 stat /var/lib/dpkg/alternatives/iptables
	I1210 08:08:16.520019 1140907 oci.go:144] the created container "enable-default-cni-945825" has a running status.
	I1210 08:08:16.520050 1140907 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/enable-default-cni-945825/id_rsa...
	I1210 08:08:16.996581 1140907 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-784887/.minikube/machines/enable-default-cni-945825/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 08:08:17.018772 1140907 cli_runner.go:164] Run: docker container inspect enable-default-cni-945825 --format={{.State.Status}}
	I1210 08:08:17.036918 1140907 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 08:08:17.036944 1140907 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-945825 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 08:08:17.096230 1140907 cli_runner.go:164] Run: docker container inspect enable-default-cni-945825 --format={{.State.Status}}
	I1210 08:08:17.117779 1140907 machine.go:94] provisionDockerMachine start ...
	I1210 08:08:17.117910 1140907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-945825
	I1210 08:08:17.138602 1140907 main.go:143] libmachine: Using SSH client type: native
	I1210 08:08:17.138963 1140907 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33880 <nil> <nil>}
	I1210 08:08:17.138980 1140907 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 08:08:17.139622 1140907 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33112->127.0.0.1:33880: read: connection reset by peer
	I1210 08:08:20.274134 1140907 main.go:143] libmachine: SSH cmd err, output: <nil>: enable-default-cni-945825
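The handshake at 08:08:17 is reset, most likely because sshd inside the fresh container is not accepting connections yet, and the same command succeeds by 08:08:20, so the provisioner evidently retries the dial. A Go sketch of such a dial-with-retry using golang.org/x/crypto/ssh; the port and key path are from this log, while the disabled host-key check and the 10-attempt cap are simplifications of mine:

	// ssh_retry.go: keep dialing until the guest's sshd accepts the handshake.
	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/22089-784887/.minikube/machines/enable-default-cni-945825/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // simplification; verify host keys in real use
			Timeout:         5 * time.Second,
		}
		for attempt := 1; attempt <= 10; attempt++ {
			client, err := ssh.Dial("tcp", "127.0.0.1:33880", cfg)
			if err == nil {
				defer client.Close()
				fmt.Println("connected on attempt", attempt)
				return
			}
			fmt.Println("dial failed, retrying:", err)
			time.Sleep(time.Second)
		}
		fmt.Println("gave up after 10 attempts")
	}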
	
	I1210 08:08:20.274160 1140907 ubuntu.go:182] provisioning hostname "enable-default-cni-945825"
	I1210 08:08:20.274228 1140907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-945825
	I1210 08:08:20.292113 1140907 main.go:143] libmachine: Using SSH client type: native
	I1210 08:08:20.292435 1140907 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33880 <nil> <nil>}
	I1210 08:08:20.292455 1140907 main.go:143] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-945825 && echo "enable-default-cni-945825" | sudo tee /etc/hostname
	I1210 08:08:20.436315 1140907 main.go:143] libmachine: SSH cmd err, output: <nil>: enable-default-cni-945825
	
	I1210 08:08:20.436472 1140907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-945825
	I1210 08:08:20.453511 1140907 main.go:143] libmachine: Using SSH client type: native
	I1210 08:08:20.453820 1140907 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33880 <nil> <nil>}
	I1210 08:08:20.453837 1140907 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-945825' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-945825/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-945825' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 08:08:20.586437 1140907 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 08:08:20.586500 1140907 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
	I1210 08:08:20.586557 1140907 ubuntu.go:190] setting up certificates
	I1210 08:08:20.586573 1140907 provision.go:84] configureAuth start
	I1210 08:08:20.586652 1140907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-945825
	I1210 08:08:20.604113 1140907 provision.go:143] copyHostCerts
	I1210 08:08:20.604193 1140907 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
	I1210 08:08:20.604208 1140907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
	I1210 08:08:20.604289 1140907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
	I1210 08:08:20.604388 1140907 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
	I1210 08:08:20.604397 1140907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
	I1210 08:08:20.604424 1140907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
	I1210 08:08:20.604481 1140907 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
	I1210 08:08:20.604495 1140907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
	I1210 08:08:20.604521 1140907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
	I1210 08:08:20.604571 1140907 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-945825 san=[127.0.0.1 192.168.76.2 enable-default-cni-945825 localhost minikube]
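provision.go:117 mints a server certificate whose subject organization and SANs are spelled out in the line above. A Go standard-library sketch that produces a certificate carrying those SANs; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair named in the log:

	// san_cert.go: build a server cert with the SANs listed in the log line above.
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.enable-default-cni-945825"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			// SANs from the san=[...] list above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			DNSNames:    []string{"enable-default-cni-945825", "localhost", "minikube"},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Self-signed here; minikube would pass the CA cert and key as parent/signer.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}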
	I1210 08:08:21.085260 1140907 provision.go:177] copyRemoteCerts
	I1210 08:08:21.085332 1140907 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 08:08:21.085387 1140907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-945825
	I1210 08:08:21.102062 1140907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/enable-default-cni-945825/id_rsa Username:docker}
	I1210 08:08:21.198326 1140907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 08:08:21.215704 1140907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1210 08:08:21.233598 1140907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 08:08:21.251433 1140907 provision.go:87] duration metric: took 664.840902ms to configureAuth
	I1210 08:08:21.251463 1140907 ubuntu.go:206] setting minikube options for container-runtime
	I1210 08:08:21.251655 1140907 config.go:182] Loaded profile config "enable-default-cni-945825": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1210 08:08:21.251669 1140907 machine.go:97] duration metric: took 4.133869803s to provisionDockerMachine
	I1210 08:08:21.251677 1140907 client.go:176] duration metric: took 9.934822427s to LocalClient.Create
	I1210 08:08:21.251706 1140907 start.go:167] duration metric: took 9.934906398s to libmachine.API.Create "enable-default-cni-945825"
	I1210 08:08:21.251718 1140907 start.go:293] postStartSetup for "enable-default-cni-945825" (driver="docker")
	I1210 08:08:21.251727 1140907 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 08:08:21.251796 1140907 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 08:08:21.251844 1140907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-945825
	I1210 08:08:21.268548 1140907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/enable-default-cni-945825/id_rsa Username:docker}
	I1210 08:08:21.366650 1140907 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 08:08:21.369981 1140907 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 08:08:21.370009 1140907 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 08:08:21.370021 1140907 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
	I1210 08:08:21.370075 1140907 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
	I1210 08:08:21.370155 1140907 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
	I1210 08:08:21.370259 1140907 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 08:08:21.377818 1140907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 08:08:21.395454 1140907 start.go:296] duration metric: took 143.721401ms for postStartSetup
	I1210 08:08:21.395844 1140907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-945825
	I1210 08:08:21.412752 1140907 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/config.json ...
	I1210 08:08:21.413049 1140907 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 08:08:21.413092 1140907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-945825
	I1210 08:08:21.430387 1140907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/enable-default-cni-945825/id_rsa Username:docker}
	I1210 08:08:21.527543 1140907 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 08:08:21.532417 1140907 start.go:128] duration metric: took 10.219327904s to createHost
	I1210 08:08:21.532442 1140907 start.go:83] releasing machines lock for "enable-default-cni-945825", held for 10.219472118s
	I1210 08:08:21.532516 1140907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-945825
	I1210 08:08:21.556966 1140907 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 08:08:21.557127 1140907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-945825
	I1210 08:08:21.557192 1140907 ssh_runner.go:195] Run: cat /version.json
	I1210 08:08:21.557225 1140907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-945825
	I1210 08:08:21.579244 1140907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/enable-default-cni-945825/id_rsa Username:docker}
	I1210 08:08:21.599976 1140907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/enable-default-cni-945825/id_rsa Username:docker}
	I1210 08:08:21.784756 1140907 ssh_runner.go:195] Run: systemctl --version
	I1210 08:08:21.791265 1140907 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 08:08:21.795682 1140907 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 08:08:21.795794 1140907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 08:08:21.823399 1140907 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1210 08:08:21.823425 1140907 start.go:496] detecting cgroup driver to use...
	I1210 08:08:21.823458 1140907 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 08:08:21.823520 1140907 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 08:08:21.838425 1140907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 08:08:21.851237 1140907 docker.go:218] disabling cri-docker service (if available) ...
	I1210 08:08:21.851312 1140907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 08:08:21.869016 1140907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 08:08:21.887481 1140907 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 08:08:22.017348 1140907 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 08:08:22.147394 1140907 docker.go:234] disabling docker service ...
	I1210 08:08:22.147466 1140907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 08:08:22.170698 1140907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 08:08:22.185219 1140907 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 08:08:22.302257 1140907 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 08:08:22.439119 1140907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 08:08:22.452720 1140907 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 08:08:22.467236 1140907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 08:08:22.476774 1140907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 08:08:22.486129 1140907 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 08:08:22.486206 1140907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 08:08:22.495621 1140907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 08:08:22.505084 1140907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 08:08:22.514525 1140907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 08:08:22.523697 1140907 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 08:08:22.532321 1140907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 08:08:22.541617 1140907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 08:08:22.551791 1140907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 08:08:22.560936 1140907 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 08:08:22.568524 1140907 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 08:08:22.576327 1140907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 08:08:22.684317 1140907 ssh_runner.go:195] Run: sudo systemctl restart containerd
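The run of sed -i commands above rewrites /etc/containerd/config.toml before the restart: pinning the sandbox image to registry.k8s.io/pause:3.10.1, forcing SystemdCgroup = false to match the cgroupfs driver detected at 08:08:21.823, normalizing the runc runtime to io.containerd.runc.v2, and re-enabling unprivileged ports. A Go sketch of one such edit applied to a string; the two-line config fragment is invented for the example:

	// toml_edit.go: the kind of in-place rewrite the sed commands above perform.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true`
		// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
	}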
	I1210 08:08:22.825760 1140907 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 08:08:22.825879 1140907 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 08:08:22.829764 1140907 start.go:564] Will wait 60s for crictl version
	I1210 08:08:22.829876 1140907 ssh_runner.go:195] Run: which crictl
	I1210 08:08:22.833434 1140907 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 08:08:22.856948 1140907 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 08:08:22.857098 1140907 ssh_runner.go:195] Run: containerd --version
	I1210 08:08:22.881209 1140907 ssh_runner.go:195] Run: containerd --version
	I1210 08:08:22.911380 1140907 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1210 08:08:22.914545 1140907 cli_runner.go:164] Run: docker network inspect enable-default-cni-945825 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 08:08:22.930967 1140907 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 08:08:22.934822 1140907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 08:08:22.945087 1140907 kubeadm.go:884] updating cluster {Name:enable-default-cni-945825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-945825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 08:08:22.945204 1140907 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1210 08:08:22.945272 1140907 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 08:08:22.970164 1140907 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 08:08:22.970192 1140907 containerd.go:534] Images already preloaded, skipping extraction
	I1210 08:08:22.970253 1140907 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 08:08:22.994668 1140907 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 08:08:22.994691 1140907 cache_images.go:86] Images are preloaded, skipping loading
	I1210 08:08:22.994700 1140907 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 containerd true true} ...
	I1210 08:08:22.994789 1140907 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-945825 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-945825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1210 08:08:22.994854 1140907 ssh_runner.go:195] Run: sudo crictl info
	I1210 08:08:23.025518 1140907 cni.go:84] Creating CNI manager for "bridge"
	I1210 08:08:23.025551 1140907 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 08:08:23.025574 1140907 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-945825 NodeName:enable-default-cni-945825 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 08:08:23.025698 1140907 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "enable-default-cni-945825"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
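The generated kubeadm config above bundles four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration; it lands on the node as /var/tmp/minikube/kubeadm.yaml.new via the scp a few lines below. A Go sketch that splits such a multi-document file and prints each document's apiVersion and kind, assuming gopkg.in/yaml.v3:

	// kubeadm_docs.go: enumerate the documents in a multi-doc kubeadm YAML file.
	package main

	import (
		"bytes"
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	type header struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	func main() {
		raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		dec := yaml.NewDecoder(bytes.NewReader(raw))
		for {
			var h header
			if err := dec.Decode(&h); err != nil {
				if errors.Is(err, io.EOF) {
					return // all documents consumed
				}
				panic(err)
			}
			fmt.Printf("%s %s\n", h.APIVersion, h.Kind)
		}
	}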
	
	I1210 08:08:23.025774 1140907 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 08:08:23.033697 1140907 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 08:08:23.033769 1140907 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 08:08:23.041750 1140907 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1210 08:08:23.062983 1140907 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 08:08:23.078590 1140907 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2238 bytes)
	I1210 08:08:23.093716 1140907 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 08:08:23.097668 1140907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 08:08:23.108914 1140907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 08:08:23.217791 1140907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 08:08:23.234040 1140907 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825 for IP: 192.168.76.2
	I1210 08:08:23.234129 1140907 certs.go:195] generating shared ca certs ...
	I1210 08:08:23.234168 1140907 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:08:23.234406 1140907 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
	I1210 08:08:23.234583 1140907 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
	I1210 08:08:23.234626 1140907 certs.go:257] generating profile certs ...
	I1210 08:08:23.234720 1140907 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/client.key
	I1210 08:08:23.234752 1140907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/client.crt with IP's: []
	I1210 08:08:23.366982 1140907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/client.crt ...
	I1210 08:08:23.367016 1140907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/client.crt: {Name:mkb3bac58075cbc7e3095b5a4bd03f8ac87d2d25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:08:23.367228 1140907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/client.key ...
	I1210 08:08:23.367245 1140907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/client.key: {Name:mkc2a553cc19d1ac3dafcdca4d70871ee16563bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:08:23.367349 1140907 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/apiserver.key.a9921057
	I1210 08:08:23.367368 1140907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/apiserver.crt.a9921057 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1210 08:08:23.469653 1140907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/apiserver.crt.a9921057 ...
	I1210 08:08:23.469694 1140907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/apiserver.crt.a9921057: {Name:mk17e95642a0bf7a7ed18c81a8be195acbd57e14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:08:23.469885 1140907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/apiserver.key.a9921057 ...
	I1210 08:08:23.469899 1140907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/apiserver.key.a9921057: {Name:mk106cfc16be3df5488b3bdece5bdfd8951967df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:08:23.469985 1140907 certs.go:382] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/apiserver.crt.a9921057 -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/apiserver.crt
	I1210 08:08:23.470064 1140907 certs.go:386] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/apiserver.key.a9921057 -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/apiserver.key
	I1210 08:08:23.470128 1140907 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/proxy-client.key
	I1210 08:08:23.470149 1140907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/proxy-client.crt with IP's: []
	I1210 08:08:23.640759 1140907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/proxy-client.crt ...
	I1210 08:08:23.640790 1140907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/proxy-client.crt: {Name:mk404d5cfda01fdc5e537bfe840cb0d072381c4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:08:23.640976 1140907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/proxy-client.key ...
	I1210 08:08:23.640990 1140907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/proxy-client.key: {Name:mk3c58fdf2d011c3e53b6f55ea28f375aedc648a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:08:23.641193 1140907 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
	W1210 08:08:23.641251 1140907 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
	I1210 08:08:23.641266 1140907 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 08:08:23.641295 1140907 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
	I1210 08:08:23.641322 1140907 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
	I1210 08:08:23.641352 1140907 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
	I1210 08:08:23.641403 1140907 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
	I1210 08:08:23.641985 1140907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 08:08:23.661419 1140907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 08:08:23.680620 1140907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 08:08:23.699185 1140907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 08:08:23.717742 1140907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1210 08:08:23.735110 1140907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 08:08:23.754088 1140907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 08:08:23.773205 1140907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/enable-default-cni-945825/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 08:08:23.791692 1140907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
	I1210 08:08:23.810197 1140907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
	I1210 08:08:23.832088 1140907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 08:08:23.852291 1140907 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 08:08:23.870264 1140907 ssh_runner.go:195] Run: openssl version
	I1210 08:08:23.876783 1140907 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
	I1210 08:08:23.884658 1140907 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
	I1210 08:08:23.892374 1140907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
	I1210 08:08:23.896323 1140907 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
	I1210 08:08:23.896389 1140907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
	I1210 08:08:23.939169 1140907 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 08:08:23.946881 1140907 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/786751.pem /etc/ssl/certs/51391683.0
	I1210 08:08:23.955920 1140907 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
	I1210 08:08:23.963285 1140907 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
	I1210 08:08:23.971069 1140907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
	I1210 08:08:23.975006 1140907 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
	I1210 08:08:23.975071 1140907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
	I1210 08:08:24.016435 1140907 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 08:08:24.024386 1140907 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7867512.pem /etc/ssl/certs/3ec20f2e.0
	I1210 08:08:24.032498 1140907 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:08:24.040236 1140907 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 08:08:24.048116 1140907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:08:24.052859 1140907 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:08:24.052927 1140907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:08:24.096018 1140907 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 08:08:24.104291 1140907 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
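	The three hash-and-link passes above follow OpenSSL's subject-hash convention: the trust directory is looked up by <hash>.0 filenames, where the hash is what `openssl x509 -hash -noout` prints. A minimal sketch of one pass, using the minikubeCA paths from this run (hand-written equivalent, not taken from the log; on this run the hash resolves to b5213941, matching the link above):

	    # compute the subject hash, then expose the cert under /etc/ssl/certs/<hash>.0
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"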
	I1210 08:08:24.112254 1140907 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 08:08:24.117572 1140907 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 08:08:24.117656 1140907 kubeadm.go:401] StartCluster: {Name:enable-default-cni-945825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-945825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 08:08:24.117767 1140907 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 08:08:24.117857 1140907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 08:08:24.144816 1140907 cri.go:89] found id: ""
	I1210 08:08:24.144916 1140907 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 08:08:24.152844 1140907 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 08:08:24.160873 1140907 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 08:08:24.160984 1140907 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 08:08:24.169215 1140907 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 08:08:24.169246 1140907 kubeadm.go:158] found existing configuration files:
	
	I1210 08:08:24.169300 1140907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 08:08:24.177628 1140907 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 08:08:24.177696 1140907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 08:08:24.185470 1140907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 08:08:24.193544 1140907 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 08:08:24.193614 1140907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 08:08:24.201093 1140907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 08:08:24.209031 1140907 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 08:08:24.209148 1140907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 08:08:24.216670 1140907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 08:08:24.224712 1140907 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 08:08:24.224818 1140907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 08:08:24.232307 1140907 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 08:08:24.275331 1140907 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 08:08:24.275459 1140907 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 08:08:24.300155 1140907 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 08:08:24.300316 1140907 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 08:08:24.300381 1140907 kubeadm.go:319] OS: Linux
	I1210 08:08:24.300465 1140907 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 08:08:24.300539 1140907 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 08:08:24.300618 1140907 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 08:08:24.300692 1140907 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 08:08:24.300796 1140907 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 08:08:24.300875 1140907 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 08:08:24.300950 1140907 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 08:08:24.301026 1140907 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 08:08:24.301104 1140907 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 08:08:24.373642 1140907 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 08:08:24.373796 1140907 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 08:08:24.373907 1140907 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 08:08:24.379388 1140907 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 08:08:24.386298 1140907 out.go:252]   - Generating certificates and keys ...
	I1210 08:08:24.386411 1140907 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 08:08:24.386524 1140907 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 08:08:24.887465 1140907 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 08:08:25.169776 1140907 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 08:08:25.464408 1140907 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 08:08:26.275955 1140907 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 08:08:27.763696 1140907 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 08:08:27.764079 1140907 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-945825 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 08:08:28.398534 1140907 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 08:08:28.398708 1140907 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-945825 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 08:08:28.510124 1140907 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 08:08:28.822692 1140907 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 08:08:29.449795 1140907 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 08:08:29.450089 1140907 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 08:08:30.344363 1140907 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 08:08:30.895627 1140907 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 08:08:31.419302 1140907 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 08:08:31.608160 1140907 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 08:08:31.898099 1140907 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 08:08:31.898567 1140907 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 08:08:31.901174 1140907 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 08:08:31.904749 1140907 out.go:252]   - Booting up control plane ...
	I1210 08:08:31.904862 1140907 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 08:08:31.904941 1140907 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 08:08:31.905008 1140907 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 08:08:31.922914 1140907 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 08:08:31.923036 1140907 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 08:08:31.930937 1140907 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 08:08:31.931222 1140907 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 08:08:31.931437 1140907 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 08:08:32.067427 1140907 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 08:08:32.067548 1140907 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 08:08:33.572515 1140907 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.505141733s
	I1210 08:08:33.576779 1140907 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 08:08:33.576882 1140907 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1210 08:08:33.577493 1140907 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 08:08:33.577587 1140907 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 08:08:36.307448 1140907 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.730021381s
	I1210 08:08:38.562567 1140907 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.98575282s
	I1210 08:08:40.578400 1140907 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.001369843s
	I1210 08:08:40.610538 1140907 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 08:08:40.627800 1140907 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 08:08:40.642917 1140907 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 08:08:40.643136 1140907 kubeadm.go:319] [mark-control-plane] Marking the node enable-default-cni-945825 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 08:08:40.657697 1140907 kubeadm.go:319] [bootstrap-token] Using token: vfnia6.t2ahqfnlrok82xum
	I1210 08:08:40.660836 1140907 out.go:252]   - Configuring RBAC rules ...
	I1210 08:08:40.660972 1140907 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 08:08:40.665765 1140907 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 08:08:40.677453 1140907 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 08:08:40.681671 1140907 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 08:08:40.685966 1140907 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 08:08:40.692291 1140907 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 08:08:40.987569 1140907 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 08:08:41.411949 1140907 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 08:08:41.984934 1140907 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 08:08:41.986340 1140907 kubeadm.go:319] 
	I1210 08:08:41.986418 1140907 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 08:08:41.986430 1140907 kubeadm.go:319] 
	I1210 08:08:41.986531 1140907 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 08:08:41.986537 1140907 kubeadm.go:319] 
	I1210 08:08:41.986560 1140907 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 08:08:41.986615 1140907 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 08:08:41.986662 1140907 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 08:08:41.986666 1140907 kubeadm.go:319] 
	I1210 08:08:41.986716 1140907 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 08:08:41.986720 1140907 kubeadm.go:319] 
	I1210 08:08:41.986764 1140907 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 08:08:41.986768 1140907 kubeadm.go:319] 
	I1210 08:08:41.986816 1140907 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 08:08:41.986886 1140907 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 08:08:41.986950 1140907 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 08:08:41.986954 1140907 kubeadm.go:319] 
	I1210 08:08:41.987033 1140907 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 08:08:41.987105 1140907 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 08:08:41.987109 1140907 kubeadm.go:319] 
	I1210 08:08:41.987188 1140907 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token vfnia6.t2ahqfnlrok82xum \
	I1210 08:08:41.987285 1140907 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e9f3cb78cb77d4f01fb49055e1f2de1580fc701c72db340d5c15a42a39b8dd0 \
	I1210 08:08:41.987304 1140907 kubeadm.go:319] 	--control-plane 
	I1210 08:08:41.987308 1140907 kubeadm.go:319] 
	I1210 08:08:41.987387 1140907 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 08:08:41.987391 1140907 kubeadm.go:319] 
	I1210 08:08:41.987468 1140907 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token vfnia6.t2ahqfnlrok82xum \
	I1210 08:08:41.987564 1140907 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e9f3cb78cb77d4f01fb49055e1f2de1580fc701c72db340d5c15a42a39b8dd0 
	I1210 08:08:41.990915 1140907 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1210 08:08:41.991144 1140907 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 08:08:41.991255 1140907 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 08:08:41.991275 1140907 cni.go:84] Creating CNI manager for "bridge"
	I1210 08:08:41.994501 1140907 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 08:08:41.997464 1140907 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 08:08:42.008269 1140907 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 08:08:42.027012 1140907 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 08:08:42.027140 1140907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 08:08:42.027206 1140907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-945825 minikube.k8s.io/updated_at=2025_12_10T08_08_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9 minikube.k8s.io/name=enable-default-cni-945825 minikube.k8s.io/primary=true
	I1210 08:08:42.197122 1140907 ops.go:34] apiserver oom_adj: -16
	I1210 08:08:42.197232 1140907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 08:08:42.697732 1140907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 08:08:43.197320 1140907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 08:08:43.697313 1140907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 08:08:44.197731 1140907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 08:08:44.697377 1140907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 08:08:45.197433 1140907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 08:08:45.697832 1140907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 08:08:46.198078 1140907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 08:08:46.300028 1140907 kubeadm.go:1114] duration metric: took 4.272937779s to wait for elevateKubeSystemPrivileges
	I1210 08:08:46.300061 1140907 kubeadm.go:403] duration metric: took 22.182427559s to StartCluster
	I1210 08:08:46.300096 1140907 settings.go:142] acquiring lock: {Name:mkea9bfe0055ec090e4ce10a287599541b65b38e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:08:46.300162 1140907 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 08:08:46.301229 1140907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/kubeconfig: {Name:mk033286c712acd7067ce019b396f50f87435538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:08:46.301458 1140907 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 08:08:46.301562 1140907 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 08:08:46.301827 1140907 config.go:182] Loaded profile config "enable-default-cni-945825": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1210 08:08:46.301873 1140907 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 08:08:46.301937 1140907 addons.go:70] Setting storage-provisioner=true in profile "enable-default-cni-945825"
	I1210 08:08:46.301951 1140907 addons.go:239] Setting addon storage-provisioner=true in "enable-default-cni-945825"
	I1210 08:08:46.301974 1140907 host.go:66] Checking if "enable-default-cni-945825" exists ...
	I1210 08:08:46.302755 1140907 cli_runner.go:164] Run: docker container inspect enable-default-cni-945825 --format={{.State.Status}}
	I1210 08:08:46.302902 1140907 addons.go:70] Setting default-storageclass=true in profile "enable-default-cni-945825"
	I1210 08:08:46.302923 1140907 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-945825"
	I1210 08:08:46.303188 1140907 cli_runner.go:164] Run: docker container inspect enable-default-cni-945825 --format={{.State.Status}}
	I1210 08:08:46.305475 1140907 out.go:179] * Verifying Kubernetes components...
	I1210 08:08:46.310849 1140907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 08:08:46.344149 1140907 addons.go:239] Setting addon default-storageclass=true in "enable-default-cni-945825"
	I1210 08:08:46.344191 1140907 host.go:66] Checking if "enable-default-cni-945825" exists ...
	I1210 08:08:46.344651 1140907 cli_runner.go:164] Run: docker container inspect enable-default-cni-945825 --format={{.State.Status}}
	I1210 08:08:46.349477 1140907 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 08:08:46.352678 1140907 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 08:08:46.352711 1140907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 08:08:46.352784 1140907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-945825
	I1210 08:08:46.392118 1140907 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 08:08:46.392141 1140907 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 08:08:46.392204 1140907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-945825
	I1210 08:08:46.413405 1140907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/enable-default-cni-945825/id_rsa Username:docker}
	I1210 08:08:46.428145 1140907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/enable-default-cni-945825/id_rsa Username:docker}
	I1210 08:08:46.783094 1140907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 08:08:46.783337 1140907 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 08:08:46.792519 1140907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 08:08:46.820966 1140907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 08:08:47.646396 1140907 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
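	This host record was produced by the long sed pipeline a few lines up: it prepends a hosts{} stanza to the CoreDNS Corefile so in-cluster pods can resolve host.minikube.internal to the gateway IP. The same edit written out readably (a sketch with the sed expressions from the log, assuming a kubectl already pointed at this cluster instead of the sudo'd in-VM binary):

	    # inject the host record and enable query logging, then replace the ConfigMap
	    kubectl -n kube-system get configmap coredns -o yaml \
	      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' \
	            -e '/^        errors *$/i \        log' \
	      | kubectl replace -f -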
	I1210 08:08:47.648415 1140907 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-945825" to be "Ready" ...
	I1210 08:08:47.680174 1140907 node_ready.go:49] node "enable-default-cni-945825" is "Ready"
	I1210 08:08:47.680248 1140907 node_ready.go:38] duration metric: took 31.791003ms for node "enable-default-cni-945825" to be "Ready" ...
	I1210 08:08:47.680278 1140907 api_server.go:52] waiting for apiserver process to appear ...
	I1210 08:08:47.680365 1140907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:08:48.060615 1140907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.239568875s)
	I1210 08:08:48.060719 1140907 api_server.go:72] duration metric: took 1.759229429s to wait for apiserver process to appear ...
	I1210 08:08:48.060846 1140907 api_server.go:88] waiting for apiserver healthz status ...
	I1210 08:08:48.060865 1140907 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 08:08:48.066714 1140907 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1210 08:08:48.070093 1140907 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1210 08:08:48.070592 1140907 addons.go:530] duration metric: took 1.768713884s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1210 08:08:48.071099 1140907 api_server.go:141] control plane version: v1.34.2
	I1210 08:08:48.071118 1140907 api_server.go:131] duration metric: took 10.266412ms to wait for apiserver health ...
	I1210 08:08:48.071127 1140907 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 08:08:48.075285 1140907 system_pods.go:59] 8 kube-system pods found
	I1210 08:08:48.075327 1140907 system_pods.go:61] "coredns-66bc5c9577-2qf24" [93600545-a8fc-471e-8401-8c9da08dab5c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 08:08:48.075335 1140907 system_pods.go:61] "coredns-66bc5c9577-lmlcp" [4d9035dd-08dc-4b70-b294-cfe8e714620e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 08:08:48.075343 1140907 system_pods.go:61] "etcd-enable-default-cni-945825" [662d005a-1932-42c4-89ef-b550383ab640] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 08:08:48.075355 1140907 system_pods.go:61] "kube-apiserver-enable-default-cni-945825" [07785745-0373-41c8-9e07-c634a9065c63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 08:08:48.075367 1140907 system_pods.go:61] "kube-controller-manager-enable-default-cni-945825" [eee5c7ab-a121-45d8-8570-de6e6da492b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 08:08:48.075372 1140907 system_pods.go:61] "kube-proxy-rzd6l" [cefcc132-a5b4-4be8-b0d7-2e3c64653e53] Running
	I1210 08:08:48.075378 1140907 system_pods.go:61] "kube-scheduler-enable-default-cni-945825" [8b093420-d951-43a4-ab3f-f7d2d7fdad26] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 08:08:48.075384 1140907 system_pods.go:61] "storage-provisioner" [3f6fd13f-a57d-48ce-b346-494a2ea2513b] Pending
	I1210 08:08:48.075402 1140907 system_pods.go:74] duration metric: took 4.269904ms to wait for pod list to return data ...
	I1210 08:08:48.075410 1140907 default_sa.go:34] waiting for default service account to be created ...
	I1210 08:08:48.078435 1140907 default_sa.go:45] found service account: "default"
	I1210 08:08:48.078536 1140907 default_sa.go:55] duration metric: took 3.11875ms for default service account to be created ...
	I1210 08:08:48.078556 1140907 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 08:08:48.091927 1140907 system_pods.go:86] 8 kube-system pods found
	I1210 08:08:48.091971 1140907 system_pods.go:89] "coredns-66bc5c9577-2qf24" [93600545-a8fc-471e-8401-8c9da08dab5c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 08:08:48.092006 1140907 system_pods.go:89] "coredns-66bc5c9577-lmlcp" [4d9035dd-08dc-4b70-b294-cfe8e714620e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 08:08:48.092022 1140907 system_pods.go:89] "etcd-enable-default-cni-945825" [662d005a-1932-42c4-89ef-b550383ab640] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 08:08:48.092031 1140907 system_pods.go:89] "kube-apiserver-enable-default-cni-945825" [07785745-0373-41c8-9e07-c634a9065c63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 08:08:48.092042 1140907 system_pods.go:89] "kube-controller-manager-enable-default-cni-945825" [eee5c7ab-a121-45d8-8570-de6e6da492b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 08:08:48.092048 1140907 system_pods.go:89] "kube-proxy-rzd6l" [cefcc132-a5b4-4be8-b0d7-2e3c64653e53] Running
	I1210 08:08:48.092055 1140907 system_pods.go:89] "kube-scheduler-enable-default-cni-945825" [8b093420-d951-43a4-ab3f-f7d2d7fdad26] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 08:08:48.092065 1140907 system_pods.go:89] "storage-provisioner" [3f6fd13f-a57d-48ce-b346-494a2ea2513b] Pending
	I1210 08:08:48.092102 1140907 retry.go:31] will retry after 257.31528ms: missing components: kube-dns
	I1210 08:08:48.150865 1140907 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-945825" context rescaled to 1 replicas
	I1210 08:08:48.358107 1140907 system_pods.go:86] 8 kube-system pods found
	I1210 08:08:48.358211 1140907 system_pods.go:89] "coredns-66bc5c9577-2qf24" [93600545-a8fc-471e-8401-8c9da08dab5c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 08:08:48.358235 1140907 system_pods.go:89] "coredns-66bc5c9577-lmlcp" [4d9035dd-08dc-4b70-b294-cfe8e714620e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 08:08:48.358286 1140907 system_pods.go:89] "etcd-enable-default-cni-945825" [662d005a-1932-42c4-89ef-b550383ab640] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 08:08:48.358328 1140907 system_pods.go:89] "kube-apiserver-enable-default-cni-945825" [07785745-0373-41c8-9e07-c634a9065c63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 08:08:48.358372 1140907 system_pods.go:89] "kube-controller-manager-enable-default-cni-945825" [eee5c7ab-a121-45d8-8570-de6e6da492b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 08:08:48.358403 1140907 system_pods.go:89] "kube-proxy-rzd6l" [cefcc132-a5b4-4be8-b0d7-2e3c64653e53] Running
	I1210 08:08:48.358426 1140907 system_pods.go:89] "kube-scheduler-enable-default-cni-945825" [8b093420-d951-43a4-ab3f-f7d2d7fdad26] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 08:08:48.358461 1140907 system_pods.go:89] "storage-provisioner" [3f6fd13f-a57d-48ce-b346-494a2ea2513b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 08:08:48.358514 1140907 retry.go:31] will retry after 304.047941ms: missing components: kube-dns
	I1210 08:08:48.668609 1140907 system_pods.go:86] 8 kube-system pods found
	I1210 08:08:48.668705 1140907 system_pods.go:89] "coredns-66bc5c9577-2qf24" [93600545-a8fc-471e-8401-8c9da08dab5c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 08:08:48.668732 1140907 system_pods.go:89] "coredns-66bc5c9577-lmlcp" [4d9035dd-08dc-4b70-b294-cfe8e714620e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 08:08:48.668773 1140907 system_pods.go:89] "etcd-enable-default-cni-945825" [662d005a-1932-42c4-89ef-b550383ab640] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 08:08:48.668798 1140907 system_pods.go:89] "kube-apiserver-enable-default-cni-945825" [07785745-0373-41c8-9e07-c634a9065c63] Running
	I1210 08:08:48.668822 1140907 system_pods.go:89] "kube-controller-manager-enable-default-cni-945825" [eee5c7ab-a121-45d8-8570-de6e6da492b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 08:08:48.668855 1140907 system_pods.go:89] "kube-proxy-rzd6l" [cefcc132-a5b4-4be8-b0d7-2e3c64653e53] Running
	I1210 08:08:48.668883 1140907 system_pods.go:89] "kube-scheduler-enable-default-cni-945825" [8b093420-d951-43a4-ab3f-f7d2d7fdad26] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 08:08:48.668909 1140907 system_pods.go:89] "storage-provisioner" [3f6fd13f-a57d-48ce-b346-494a2ea2513b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 08:08:48.668957 1140907 system_pods.go:126] duration metric: took 590.392201ms to wait for k8s-apps to be running ...
	I1210 08:08:48.668980 1140907 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 08:08:48.669065 1140907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 08:08:48.688527 1140907 system_svc.go:56] duration metric: took 19.53764ms WaitForService to wait for kubelet
	I1210 08:08:48.688607 1140907 kubeadm.go:587] duration metric: took 2.387116383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 08:08:48.688664 1140907 node_conditions.go:102] verifying NodePressure condition ...
	I1210 08:08:48.693247 1140907 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1210 08:08:48.693328 1140907 node_conditions.go:123] node cpu capacity is 2
	I1210 08:08:48.693357 1140907 node_conditions.go:105] duration metric: took 4.674613ms to run NodePressure ...
	I1210 08:08:48.693404 1140907 start.go:242] waiting for startup goroutines ...
	I1210 08:08:48.693429 1140907 start.go:247] waiting for cluster config update ...
	I1210 08:08:48.693453 1140907 start.go:256] writing updated cluster config ...
	I1210 08:08:48.693797 1140907 ssh_runner.go:195] Run: rm -f paused
	I1210 08:08:48.697990 1140907 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 08:08:48.703800 1140907 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2qf24" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 08:08:50.709038 1140907 pod_ready.go:104] pod "coredns-66bc5c9577-2qf24" is not "Ready", error: <nil>
	W1210 08:08:52.715467 1140907 pod_ready.go:104] pod "coredns-66bc5c9577-2qf24" is not "Ready", error: <nil>
	W1210 08:08:55.209481 1140907 pod_ready.go:104] pod "coredns-66bc5c9577-2qf24" is not "Ready", error: <nil>
	W1210 08:08:57.709696 1140907 pod_ready.go:104] pod "coredns-66bc5c9577-2qf24" is not "Ready", error: <nil>
	I1210 08:08:58.706394 1140907 pod_ready.go:99] pod "coredns-66bc5c9577-2qf24" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-2qf24" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-2qf24" not found
	I1210 08:08:58.706422 1140907 pod_ready.go:86] duration metric: took 10.002542451s for pod "coredns-66bc5c9577-2qf24" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:08:58.706431 1140907 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lmlcp" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 08:09:00.712494 1140907 pod_ready.go:104] pod "coredns-66bc5c9577-lmlcp" is not "Ready", error: <nil>
	W1210 08:09:02.712534 1140907 pod_ready.go:104] pod "coredns-66bc5c9577-lmlcp" is not "Ready", error: <nil>
	W1210 08:09:04.712831 1140907 pod_ready.go:104] pod "coredns-66bc5c9577-lmlcp" is not "Ready", error: <nil>
	W1210 08:09:07.212189 1140907 pod_ready.go:104] pod "coredns-66bc5c9577-lmlcp" is not "Ready", error: <nil>
	W1210 08:09:09.212459 1140907 pod_ready.go:104] pod "coredns-66bc5c9577-lmlcp" is not "Ready", error: <nil>
	W1210 08:09:11.712521 1140907 pod_ready.go:104] pod "coredns-66bc5c9577-lmlcp" is not "Ready", error: <nil>
	W1210 08:09:14.211833 1140907 pod_ready.go:104] pod "coredns-66bc5c9577-lmlcp" is not "Ready", error: <nil>
	W1210 08:09:16.213899 1140907 pod_ready.go:104] pod "coredns-66bc5c9577-lmlcp" is not "Ready", error: <nil>
	W1210 08:09:18.712723 1140907 pod_ready.go:104] pod "coredns-66bc5c9577-lmlcp" is not "Ready", error: <nil>
	I1210 08:09:20.711894 1140907 pod_ready.go:94] pod "coredns-66bc5c9577-lmlcp" is "Ready"
	I1210 08:09:20.711919 1140907 pod_ready.go:86] duration metric: took 22.005481421s for pod "coredns-66bc5c9577-lmlcp" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:09:20.714685 1140907 pod_ready.go:83] waiting for pod "etcd-enable-default-cni-945825" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:09:20.718948 1140907 pod_ready.go:94] pod "etcd-enable-default-cni-945825" is "Ready"
	I1210 08:09:20.718974 1140907 pod_ready.go:86] duration metric: took 4.265263ms for pod "etcd-enable-default-cni-945825" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:09:20.721003 1140907 pod_ready.go:83] waiting for pod "kube-apiserver-enable-default-cni-945825" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:09:20.725136 1140907 pod_ready.go:94] pod "kube-apiserver-enable-default-cni-945825" is "Ready"
	I1210 08:09:20.725166 1140907 pod_ready.go:86] duration metric: took 4.140061ms for pod "kube-apiserver-enable-default-cni-945825" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:09:20.727616 1140907 pod_ready.go:83] waiting for pod "kube-controller-manager-enable-default-cni-945825" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:09:20.909839 1140907 pod_ready.go:94] pod "kube-controller-manager-enable-default-cni-945825" is "Ready"
	I1210 08:09:20.909916 1140907 pod_ready.go:86] duration metric: took 182.270901ms for pod "kube-controller-manager-enable-default-cni-945825" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:09:21.110450 1140907 pod_ready.go:83] waiting for pod "kube-proxy-rzd6l" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:09:21.510438 1140907 pod_ready.go:94] pod "kube-proxy-rzd6l" is "Ready"
	I1210 08:09:21.510489 1140907 pod_ready.go:86] duration metric: took 399.980879ms for pod "kube-proxy-rzd6l" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:09:21.709452 1140907 pod_ready.go:83] waiting for pod "kube-scheduler-enable-default-cni-945825" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:09:22.109376 1140907 pod_ready.go:94] pod "kube-scheduler-enable-default-cni-945825" is "Ready"
	I1210 08:09:22.109411 1140907 pod_ready.go:86] duration metric: took 399.931634ms for pod "kube-scheduler-enable-default-cni-945825" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:09:22.109424 1140907 pod_ready.go:40] duration metric: took 33.411358929s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 08:09:22.162210 1140907 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1210 08:09:22.165356 1140907 out.go:179] * Done! kubectl is now configured to use "enable-default-cni-945825" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.820886372Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.820897753Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.820941675Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.820957323Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.820967374Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.820979354Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.820991735Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.821002452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.821025221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.821069053Z" level=info msg="Connect containerd service"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.821339826Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.821931810Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.835633697Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.835889266Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.835806303Z" level=info msg="Start subscribing containerd event"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.838543186Z" level=info msg="Start recovering state"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.862645834Z" level=info msg="Start event monitor"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.862821648Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.862884336Z" level=info msg="Start streaming server"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.862946598Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.863002574Z" level=info msg="runtime interface starting up..."
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.863060848Z" level=info msg="starting plugins..."
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.863142670Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 07:51:16 no-preload-587009 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 07:51:16 no-preload-587009 containerd[556]: time="2025-12-10T07:51:16.866796941Z" level=info msg="containerd successfully booted in 0.072064s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 08:10:57.111793   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 08:10:57.112540   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 08:10:57.114173   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 08:10:57.114740   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 08:10:57.116334   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:22] overlayfs: idmapped layers are currently not supported
	[Dec10 05:23] overlayfs: idmapped layers are currently not supported
	[Dec10 05:25] overlayfs: idmapped layers are currently not supported
	[Dec10 05:27] overlayfs: idmapped layers are currently not supported
	[  +0.867763] overlayfs: idmapped layers are currently not supported
	[Dec10 05:29] overlayfs: idmapped layers are currently not supported
	[Dec10 05:40] overlayfs: idmapped layers are currently not supported
	[Dec10 05:41] overlayfs: idmapped layers are currently not supported
	[Dec10 05:42] overlayfs: idmapped layers are currently not supported
	[ +24.057374] overlayfs: idmapped layers are currently not supported
	[Dec10 05:43] overlayfs: idmapped layers are currently not supported
	[Dec10 05:44] overlayfs: idmapped layers are currently not supported
	[Dec10 05:45] overlayfs: idmapped layers are currently not supported
	[Dec10 05:46] overlayfs: idmapped layers are currently not supported
	[Dec10 05:47] overlayfs: idmapped layers are currently not supported
	[Dec10 05:48] overlayfs: idmapped layers are currently not supported
	[Dec10 05:50] overlayfs: idmapped layers are currently not supported
	[Dec10 06:08] overlayfs: idmapped layers are currently not supported
	[Dec10 06:09] overlayfs: idmapped layers are currently not supported
	[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 08:10:57 up  6:53,  0 user,  load average: 1.03, 1.40, 1.42
	Linux no-preload-587009 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 08:10:53 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:10:54 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1569.
	Dec 10 08:10:54 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:10:54 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:10:54 no-preload-587009 kubelet[10162]: E1210 08:10:54.591913   10162 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:10:54 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:10:54 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:10:55 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1570.
	Dec 10 08:10:55 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:10:55 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:10:55 no-preload-587009 kubelet[10167]: E1210 08:10:55.341182   10167 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:10:55 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:10:55 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:10:56 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1571.
	Dec 10 08:10:56 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:10:56 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:10:56 no-preload-587009 kubelet[10185]: E1210 08:10:56.116925   10185 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:10:56 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:10:56 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:10:56 no-preload-587009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1572.
	Dec 10 08:10:56 no-preload-587009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:10:56 no-preload-587009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:10:56 no-preload-587009 kubelet[10230]: E1210 08:10:56.867045   10230 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:10:56 no-preload-587009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:10:56 no-preload-587009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-587009 -n no-preload-587009
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-587009 -n no-preload-587009: exit status 2 (347.077031ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-587009" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (270.88s)
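The kubelet journal above shows the likely root cause behind this failure (and plausibly the other v1.35.0-beta.0 failures in this run): kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host, systemd crash-loops it (restart counter past 1570), the apiserver never comes up, and so both "kubectl describe nodes" and "minikube status --format={{.APIServer}}" report the cluster as unreachable/Stopped. A minimal shell sketch for confirming this diagnosis on such a host; the cgroup probe is the standard filesystem-type check, and the service name is the one from the journal:

	# Print the filesystem type backing /sys/fs/cgroup: "cgroup2fs" means the
	# unified cgroup v2 hierarchy; "tmpfs" means the legacy cgroup v1 layout
	# that this kubelet refuses to run on.
	stat -fc %T /sys/fs/cgroup/

	# Watch the crash loop the journal records (restart counter climbing):
	sudo systemctl status kubelet.service
	sudo journalctl -u kubelet.service --since "5 minutes ago" | tail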

                                                
                                    

Test pass (345/417)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.68
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 3.9
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.16
18 TestDownloadOnly/v1.34.2/DeleteAll 0.35
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.23
21 TestDownloadOnly/v1.35.0-beta.0/json-events 5.21
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.09
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.22
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.61
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 154.65
38 TestAddons/serial/Volcano 43.58
40 TestAddons/serial/GCPAuth/Namespaces 0.18
41 TestAddons/serial/GCPAuth/FakeCredentials 9.85
44 TestAddons/parallel/Registry 15.79
45 TestAddons/parallel/RegistryCreds 0.74
46 TestAddons/parallel/Ingress 19.07
47 TestAddons/parallel/InspektorGadget 10.81
48 TestAddons/parallel/MetricsServer 6.97
50 TestAddons/parallel/CSI 39.73
51 TestAddons/parallel/Headlamp 17.06
52 TestAddons/parallel/CloudSpanner 5.58
53 TestAddons/parallel/LocalPath 51.06
54 TestAddons/parallel/NvidiaDevicePlugin 5.52
55 TestAddons/parallel/Yakd 11
57 TestAddons/StoppedEnableDisable 12.34
58 TestCertOptions 47.29
59 TestCertExpiration 235.36
61 TestForceSystemdFlag 33.67
62 TestForceSystemdEnv 34.93
63 TestDockerEnvContainerd 48.49
67 TestErrorSpam/setup 33.08
68 TestErrorSpam/start 0.76
69 TestErrorSpam/status 1.14
70 TestErrorSpam/pause 1.77
71 TestErrorSpam/unpause 1.75
72 TestErrorSpam/stop 1.64
75 TestFunctional/serial/CopySyncFile 0.01
76 TestFunctional/serial/StartWithProxy 81.55
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 7.24
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.11
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.42
84 TestFunctional/serial/CacheCmd/cache/add_local 1.24
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.89
89 TestFunctional/serial/CacheCmd/cache/delete 0.14
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
92 TestFunctional/serial/ExtraConfig 45.41
93 TestFunctional/serial/ComponentHealth 0.11
94 TestFunctional/serial/LogsCmd 1.47
95 TestFunctional/serial/LogsFileCmd 1.45
96 TestFunctional/serial/InvalidService 4.42
98 TestFunctional/parallel/ConfigCmd 0.51
99 TestFunctional/parallel/DashboardCmd 6.91
100 TestFunctional/parallel/DryRun 0.44
101 TestFunctional/parallel/InternationalLanguage 0.2
102 TestFunctional/parallel/StatusCmd 1.2
106 TestFunctional/parallel/ServiceCmdConnect 8.6
107 TestFunctional/parallel/AddonsCmd 0.14
108 TestFunctional/parallel/PersistentVolumeClaim 20.09
110 TestFunctional/parallel/SSHCmd 0.76
111 TestFunctional/parallel/CpCmd 2.04
113 TestFunctional/parallel/FileSync 0.35
114 TestFunctional/parallel/CertSync 2.21
118 TestFunctional/parallel/NodeLabels 0.12
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.73
122 TestFunctional/parallel/License 0.33
123 TestFunctional/parallel/Version/short 0.09
124 TestFunctional/parallel/Version/components 1.52
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
129 TestFunctional/parallel/ImageCommands/ImageBuild 4.02
130 TestFunctional/parallel/ImageCommands/Setup 0.72
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.31
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.49
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.4
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.58
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.7
138 TestFunctional/parallel/ProfileCmd/profile_list 0.49
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.59
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.45
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.74
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.69
144 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.47
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.92
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
150 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
155 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
156 TestFunctional/parallel/MountCmd/any-port 8.32
157 TestFunctional/parallel/ServiceCmd/List 0.7
158 TestFunctional/parallel/ServiceCmd/JSONOutput 0.74
159 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
160 TestFunctional/parallel/ServiceCmd/Format 0.39
161 TestFunctional/parallel/ServiceCmd/URL 0.41
162 TestFunctional/parallel/MountCmd/specific-port 2.49
163 TestFunctional/parallel/MountCmd/VerifyCleanup 2.01
164 TestFunctional/delete_echo-server_images 0.05
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.31
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.06
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.05
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.05
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.32
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.83
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.11
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.95
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.03
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.45
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.46
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.19
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.14
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.73
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.23
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.27
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.66
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.53
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.27
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.42
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.4
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.38
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.75
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.81
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.52
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.22
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.22
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.23
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.22
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.39
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.25
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.13
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.04
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.31
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.34
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.45
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.68
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.39
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.15
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.15
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.01
264 TestMultiControlPlane/serial/StartCluster 177.8
265 TestMultiControlPlane/serial/DeployApp 6.93
266 TestMultiControlPlane/serial/PingHostFromPods 1.58
267 TestMultiControlPlane/serial/AddWorkerNode 57.64
268 TestMultiControlPlane/serial/NodeLabels 0.09
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.03
270 TestMultiControlPlane/serial/CopyFile 19.81
271 TestMultiControlPlane/serial/StopSecondaryNode 13
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.79
273 TestMultiControlPlane/serial/RestartSecondaryNode 13.69
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.48
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 110.91
276 TestMultiControlPlane/serial/DeleteSecondaryNode 11.09
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
278 TestMultiControlPlane/serial/StopCluster 36.33
279 TestMultiControlPlane/serial/RestartCluster 60.09
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
281 TestMultiControlPlane/serial/AddSecondaryNode 53.41
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.08
287 TestJSONOutput/start/Command 52
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
293 TestJSONOutput/pause/Command 0.74
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/unpause/Command 0.64
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.94
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.24
312 TestKicCustomNetwork/create_custom_network 40.62
313 TestKicCustomNetwork/use_default_bridge_network 35.21
314 TestKicExistingNetwork 37.77
315 TestKicCustomSubnet 35.57
316 TestKicStaticIP 37.7
317 TestMainNoArgs 0.06
318 TestMinikubeProfile 71.26
321 TestMountStart/serial/StartWithMountFirst 8.36
322 TestMountStart/serial/VerifyMountFirst 0.28
323 TestMountStart/serial/StartWithMountSecond 8.34
324 TestMountStart/serial/VerifyMountSecond 0.26
325 TestMountStart/serial/DeleteFirst 1.73
326 TestMountStart/serial/VerifyMountPostDelete 0.27
327 TestMountStart/serial/Stop 1.29
328 TestMountStart/serial/RestartStopped 7.42
329 TestMountStart/serial/VerifyMountPostStop 0.26
332 TestMultiNode/serial/FreshStart2Nodes 76.31
333 TestMultiNode/serial/DeployApp2Nodes 5.55
334 TestMultiNode/serial/PingHostFrom2Pods 1.03
335 TestMultiNode/serial/AddNode 59.35
336 TestMultiNode/serial/MultiNodeLabels 0.1
337 TestMultiNode/serial/ProfileList 0.72
338 TestMultiNode/serial/CopyFile 10.42
339 TestMultiNode/serial/StopNode 2.42
340 TestMultiNode/serial/StartAfterStop 7.76
341 TestMultiNode/serial/RestartKeepsNodes 79.5
342 TestMultiNode/serial/DeleteNode 5.66
343 TestMultiNode/serial/StopMultiNode 24.19
344 TestMultiNode/serial/RestartMultiNode 57.88
345 TestMultiNode/serial/ValidateNameConflict 34.67
350 TestPreload 116.52
352 TestScheduledStopUnix 108.2
355 TestInsufficientStorage 12.65
356 TestRunningBinaryUpgrade 314.5
359 TestMissingContainerUpgrade 123.32
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
362 TestNoKubernetes/serial/StartWithK8s 46.66
363 TestNoKubernetes/serial/StartWithStopK8s 24.2
364 TestNoKubernetes/serial/Start 7.07
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
367 TestNoKubernetes/serial/ProfileList 0.72
368 TestNoKubernetes/serial/Stop 1.39
369 TestNoKubernetes/serial/StartNoArgs 7.16
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
371 TestStoppedBinaryUpgrade/Setup 1.22
372 TestStoppedBinaryUpgrade/Upgrade 308.4
373 TestStoppedBinaryUpgrade/MinikubeLogs 2.09
382 TestPause/serial/Start 50.65
383 TestPause/serial/SecondStartNoReconfiguration 6.22
384 TestPause/serial/Pause 0.74
385 TestPause/serial/VerifyStatus 0.33
386 TestPause/serial/Unpause 0.63
387 TestPause/serial/PauseAgain 0.87
388 TestPause/serial/DeletePaused 2.82
389 TestPause/serial/VerifyDeletedResources 0.38
397 TestNetworkPlugins/group/false 3.66
402 TestStartStop/group/old-k8s-version/serial/FirstStart 56.7
403 TestStartStop/group/old-k8s-version/serial/DeployApp 9.49
404 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.17
405 TestStartStop/group/old-k8s-version/serial/Stop 12.15
406 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
407 TestStartStop/group/old-k8s-version/serial/SecondStart 56.04
408 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
409 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
410 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
411 TestStartStop/group/old-k8s-version/serial/Pause 3.19
413 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 56.93
415 TestStartStop/group/embed-certs/serial/FirstStart 53.62
416 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.47
417 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.29
418 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.32
419 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
420 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 59.52
421 TestStartStop/group/embed-certs/serial/DeployApp 9.52
422 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.35
423 TestStartStop/group/embed-certs/serial/Stop 12.69
424 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
425 TestStartStop/group/embed-certs/serial/SecondStart 50.32
426 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
427 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
428 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
429 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.09
432 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
433 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
434 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
435 TestStartStop/group/embed-certs/serial/Pause 4.92
440 TestStartStop/group/newest-cni/serial/DeployApp 0
442 TestStartStop/group/no-preload/serial/Stop 1.3
443 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
445 TestStartStop/group/newest-cni/serial/Stop 1.32
446 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
449 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
450 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
451 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
453 TestNetworkPlugins/group/auto/Start 80.52
454 TestNetworkPlugins/group/auto/KubeletFlags 0.3
455 TestNetworkPlugins/group/auto/NetCatPod 11.27
456 TestNetworkPlugins/group/auto/DNS 0.19
457 TestNetworkPlugins/group/auto/Localhost 0.14
458 TestNetworkPlugins/group/auto/HairPin 0.18
459 TestNetworkPlugins/group/flannel/Start 64.41
460 TestNetworkPlugins/group/flannel/ControllerPod 6
461 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
462 TestNetworkPlugins/group/flannel/NetCatPod 10.25
463 TestNetworkPlugins/group/flannel/DNS 0.18
464 TestNetworkPlugins/group/flannel/Localhost 0.15
465 TestNetworkPlugins/group/flannel/HairPin 0.14
466 TestNetworkPlugins/group/calico/Start 57.01
467 TestNetworkPlugins/group/calico/ControllerPod 6
468 TestNetworkPlugins/group/calico/KubeletFlags 0.34
469 TestNetworkPlugins/group/calico/NetCatPod 9.26
470 TestNetworkPlugins/group/calico/DNS 0.23
471 TestNetworkPlugins/group/calico/Localhost 0.16
472 TestNetworkPlugins/group/calico/HairPin 0.16
473 TestNetworkPlugins/group/custom-flannel/Start 59.1
474 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
475 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.27
476 TestNetworkPlugins/group/custom-flannel/DNS 0.18
477 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
478 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
479 TestNetworkPlugins/group/kindnet/Start 82.99
480 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
481 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
482 TestNetworkPlugins/group/kindnet/NetCatPod 10.31
483 TestNetworkPlugins/group/kindnet/DNS 0.18
484 TestNetworkPlugins/group/kindnet/Localhost 0.15
485 TestNetworkPlugins/group/kindnet/HairPin 0.14
486 TestNetworkPlugins/group/bridge/Start 74.23
488 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
489 TestNetworkPlugins/group/bridge/NetCatPod 10.32
490 TestNetworkPlugins/group/bridge/DNS 0.17
491 TestNetworkPlugins/group/bridge/Localhost 0.16
492 TestNetworkPlugins/group/bridge/HairPin 0.15
493 TestNetworkPlugins/group/enable-default-cni/Start 71.18
494 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
495 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.27
496 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
497 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
498 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
TestDownloadOnly/v1.28.0/json-events (5.68s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-281527 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-281527 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.67551657s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.68s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1210 06:12:27.315477  786751 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1210 06:12:27.315552  786751 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-281527
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-281527: exit status 85 (98.260114ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-281527 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-281527 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:12:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:12:21.685021  786756 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:12:21.685232  786756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:12:21.685261  786756 out.go:374] Setting ErrFile to fd 2...
	I1210 06:12:21.685281  786756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:12:21.685562  786756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	W1210 06:12:21.685742  786756 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22089-784887/.minikube/config/config.json: open /home/jenkins/minikube-integration/22089-784887/.minikube/config/config.json: no such file or directory
	I1210 06:12:21.686236  786756 out.go:368] Setting JSON to true
	I1210 06:12:21.687159  786756 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":17666,"bootTime":1765329476,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 06:12:21.687261  786756 start.go:143] virtualization:  
	I1210 06:12:21.692656  786756 out.go:99] [download-only-281527] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:12:21.692894  786756 notify.go:221] Checking for updates...
	W1210 06:12:21.692839  786756 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball: no such file or directory
	I1210 06:12:21.696359  786756 out.go:171] MINIKUBE_LOCATION=22089
	I1210 06:12:21.699846  786756 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:12:21.703118  786756 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:12:21.706187  786756 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 06:12:21.709423  786756 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1210 06:12:21.715552  786756 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 06:12:21.715803  786756 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:12:21.751354  786756 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:12:21.751464  786756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:12:21.804949  786756 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-10 06:12:21.795630102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:12:21.805069  786756 docker.go:319] overlay module found
	I1210 06:12:21.808181  786756 out.go:99] Using the docker driver based on user configuration
	I1210 06:12:21.808221  786756 start.go:309] selected driver: docker
	I1210 06:12:21.808228  786756 start.go:927] validating driver "docker" against <nil>
	I1210 06:12:21.808347  786756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:12:21.860887  786756 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-10 06:12:21.852070162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:12:21.861047  786756 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 06:12:21.861360  786756 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1210 06:12:21.861531  786756 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 06:12:21.864834  786756 out.go:171] Using Docker driver with root privileges
	I1210 06:12:21.867897  786756 cni.go:84] Creating CNI manager for ""
	I1210 06:12:21.867969  786756 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:12:21.867982  786756 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 06:12:21.868073  786756 start.go:353] cluster config:
	{Name:download-only-281527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-281527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:12:21.871171  786756 out.go:99] Starting "download-only-281527" primary control-plane node in "download-only-281527" cluster
	I1210 06:12:21.871193  786756 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 06:12:21.874229  786756 out.go:99] Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:12:21.874268  786756 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1210 06:12:21.874446  786756 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:12:21.891284  786756 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca to local cache
	I1210 06:12:21.891487  786756 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local cache directory
	I1210 06:12:21.891601  786756 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca to local cache
	I1210 06:12:21.926113  786756 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1210 06:12:21.926139  786756 cache.go:65] Caching tarball of preloaded images
	I1210 06:12:21.926311  786756 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1210 06:12:21.929762  786756 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1210 06:12:21.929804  786756 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1210 06:12:22.018137  786756 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1210 06:12:22.018274  786756 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1210 06:12:26.658639  786756 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1210 06:12:26.659053  786756 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/download-only-281527/config.json ...
	I1210 06:12:26.659091  786756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/download-only-281527/config.json: {Name:mkb5a1d6e061c765eb93f10338efee9416e3e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:12:26.659290  786756 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1210 06:12:26.659512  786756 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-281527 host does not exist
	  To start a cluster, run: "minikube start -p download-only-281527"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)
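The Last Start log above also documents how the preload is fetched: minikube first asks the GCS API for the tarball's MD5 checksum, then downloads the tarball with that checksum pinned to the URL. A hedged sketch of the same two steps by hand, with curl and md5sum standing in for minikube's internal downloader (URL and checksum copied verbatim from the log):

	# Download the preload tarball named in the log ...
	curl -fSLO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4"
	# ... and verify it against the MD5 the GCS API returned for it.
	echo "38d7f581f2fa4226c8af2c9106b982b7  preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4" | md5sum -c -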

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-281527
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (3.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-961212 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-961212 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (3.900814459s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (3.90s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1210 06:12:31.670142  786751 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
I1210 06:12:31.670177  786751 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)
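Each preload-exists subtest reduces, in effect, to a file-existence check on the cache path the log prints. A one-line sketch of the equivalent check (path copied from the log above):

	test -f /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 && echo "preload exists"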

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-961212
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-961212: exit status 85 (158.783797ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-281527 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-281527 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │ 10 Dec 25 06:12 UTC │
	│ delete  │ -p download-only-281527                                                                                                                                                               │ download-only-281527 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │ 10 Dec 25 06:12 UTC │
	│ start   │ -o=json --download-only -p download-only-961212 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-961212 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:12:27
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:12:27.813112  786958 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:12:27.813328  786958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:12:27.813358  786958 out.go:374] Setting ErrFile to fd 2...
	I1210 06:12:27.813378  786958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:12:27.813699  786958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 06:12:27.814171  786958 out.go:368] Setting JSON to true
	I1210 06:12:27.815088  786958 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":17672,"bootTime":1765329476,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 06:12:27.815191  786958 start.go:143] virtualization:  
	I1210 06:12:27.818455  786958 out.go:99] [download-only-961212] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:12:27.818673  786958 notify.go:221] Checking for updates...
	I1210 06:12:27.821838  786958 out.go:171] MINIKUBE_LOCATION=22089
	I1210 06:12:27.825320  786958 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:12:27.828261  786958 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:12:27.831415  786958 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 06:12:27.834351  786958 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1210 06:12:27.840265  786958 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 06:12:27.840607  786958 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:12:27.875360  786958 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:12:27.875491  786958 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:12:27.932030  786958 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-10 06:12:27.922958197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:12:27.932145  786958 docker.go:319] overlay module found
	I1210 06:12:27.935253  786958 out.go:99] Using the docker driver based on user configuration
	I1210 06:12:27.935298  786958 start.go:309] selected driver: docker
	I1210 06:12:27.935317  786958 start.go:927] validating driver "docker" against <nil>
	I1210 06:12:27.935429  786958 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:12:27.993356  786958 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-10 06:12:27.983994003 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:12:27.993511  786958 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 06:12:27.993774  786958 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1210 06:12:27.993927  786958 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 06:12:27.997074  786958 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-961212 host does not exist
	  To start a cluster, run: "minikube start -p download-only-961212"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.16s)

TestDownloadOnly/v1.34.2/DeleteAll (0.35s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.35s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-961212
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.35.0-beta.0/json-events (5.21s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-930199 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-930199 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.213900061s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (5.21s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1210 06:12:37.623167  786751 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1210 06:12:37.623204  786751 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)
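
The check is satisfied because the json-events run above already fetched the tarball into the profile cache. A quick manual confirmation (a sketch; the path is taken verbatim from the log lines above):

    # List the cached preload tarball that preload-exists looks for
    ls -lh /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4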

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-930199
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-930199: exit status 85 (89.898654ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                             ARGS                                                                                             │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-281527 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-281527 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │ 10 Dec 25 06:12 UTC │
	│ delete  │ -p download-only-281527                                                                                                                                                                      │ download-only-281527 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │ 10 Dec 25 06:12 UTC │
	│ start   │ -o=json --download-only -p download-only-961212 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-961212 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │ 10 Dec 25 06:12 UTC │
	│ delete  │ -p download-only-961212                                                                                                                                                                      │ download-only-961212 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │ 10 Dec 25 06:12 UTC │
	│ start   │ -o=json --download-only -p download-only-930199 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-930199 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:12:32
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:12:32.451191  787158 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:12:32.451408  787158 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:12:32.451435  787158 out.go:374] Setting ErrFile to fd 2...
	I1210 06:12:32.451456  787158 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:12:32.451904  787158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 06:12:32.452528  787158 out.go:368] Setting JSON to true
	I1210 06:12:32.453978  787158 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":17677,"bootTime":1765329476,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 06:12:32.454108  787158 start.go:143] virtualization:  
	I1210 06:12:32.499908  787158 out.go:99] [download-only-930199] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:12:32.500154  787158 notify.go:221] Checking for updates...
	I1210 06:12:32.532389  787158 out.go:171] MINIKUBE_LOCATION=22089
	I1210 06:12:32.563586  787158 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:12:32.605516  787158 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:12:32.644303  787158 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 06:12:32.676108  787158 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1210 06:12:32.733262  787158 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 06:12:32.733576  787158 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:12:32.756213  787158 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:12:32.756331  787158 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:12:32.816892  787158 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-10 06:12:32.808051515 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:12:32.817004  787158 docker.go:319] overlay module found
	I1210 06:12:32.829009  787158 out.go:99] Using the docker driver based on user configuration
	I1210 06:12:32.829057  787158 start.go:309] selected driver: docker
	I1210 06:12:32.829065  787158 start.go:927] validating driver "docker" against <nil>
	I1210 06:12:32.829189  787158 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:12:32.884287  787158 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-10 06:12:32.875148843 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:12:32.884450  787158 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 06:12:32.884725  787158 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1210 06:12:32.884870  787158 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 06:12:32.910823  787158 out.go:171] Using Docker driver with root privileges
	I1210 06:12:32.940102  787158 cni.go:84] Creating CNI manager for ""
	I1210 06:12:32.940189  787158 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:12:32.940205  787158 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 06:12:32.940288  787158 start.go:353] cluster config:
	{Name:download-only-930199 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-930199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:12:32.972598  787158 out.go:99] Starting "download-only-930199" primary control-plane node in "download-only-930199" cluster
	I1210 06:12:32.972632  787158 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 06:12:32.998678  787158 out.go:99] Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:12:32.998731  787158 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:12:32.998787  787158 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:12:33.018153  787158 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca to local cache
	I1210 06:12:33.018296  787158 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local cache directory
	I1210 06:12:33.018316  787158 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local cache directory, skipping pull
	I1210 06:12:33.018321  787158 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in cache, skipping pull
	I1210 06:12:33.018328  787158 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca as a tarball
	I1210 06:12:33.049120  787158 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 06:12:33.049149  787158 cache.go:65] Caching tarball of preloaded images
	I1210 06:12:33.049359  787158 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:12:33.079618  787158 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1210 06:12:33.079661  787158 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1210 06:12:33.164361  787158 preload.go:295] Got checksum from GCS API "4ead9b9dbba82a7ecb06a001f1ffeaf3"
	I1210 06:12:33.164432  787158 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:4ead9b9dbba82a7ecb06a001f1ffeaf3 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1210 06:12:37.021915  787158 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1210 06:12:37.022364  787158 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/download-only-930199/config.json ...
	I1210 06:12:37.022413  787158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/download-only-930199/config.json: {Name:mk7ddeea3645f9e7bba9fa19a6c711e53c9db42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:12:37.022671  787158 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1210 06:12:37.022955  787158 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22089-784887/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl
	
	
	* The control-plane node download-only-930199 host does not exist
	  To start a cluster, run: "minikube start -p download-only-930199"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)
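
Exit status 85 is the expected outcome here, not a regression: "minikube logs" is run against a --download-only profile whose host was never created, which matches the "host does not exist" message in the stdout above. To reproduce by hand:

    out/minikube-linux-arm64 logs -p download-only-930199
    echo $?   # 85 while the download-only profile has no running host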

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-930199
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I1210 06:12:38.926386  786751 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-929825 --alsologtostderr --binary-mirror http://127.0.0.1:38707 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-929825" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-929825
--- PASS: TestBinaryMirror (0.61s)
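
The test points --binary-mirror at a local HTTP endpoint so kubectl and friends are fetched from the mirror rather than dl.k8s.io (see the "Not caching binary" line above). A minimal sketch of the same invocation; the profile name binary-mirror-demo is a placeholder, and port 38707 was chosen by the test harness:

    out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:38707 \
      --driver=docker --container-runtime=containerd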

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-868996
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-868996: exit status 85 (71.805337ms)

-- stdout --
	* Profile "addons-868996" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-868996"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-868996
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-868996: exit status 85 (68.724631ms)

-- stdout --
	* Profile "addons-868996" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-868996"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (154.65s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-868996 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-868996 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m34.650280556s)
--- PASS: TestAddons/Setup (154.65s)

TestAddons/serial/Volcano (43.58s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:886: volcano-controller stabilized in 59.426527ms
addons_test.go:878: volcano-admission stabilized in 59.488706ms
addons_test.go:870: volcano-scheduler stabilized in 60.593919ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-ljwzk" [cff12285-9d07-4342-8d46-9a963c8197eb] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003405303s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-2wgsp" [bafbe441-7e58-4bfe-bcd6-883f157fbbb6] Pending / Ready:ContainersNotReady (containers with unready status: [admission]) / ContainersReady:ContainersNotReady (containers with unready status: [admission])
helpers_test.go:353: "volcano-admission-6c447bd768-2wgsp" [bafbe441-7e58-4bfe-bcd6-883f157fbbb6] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 8.003946505s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-q9jfl" [21a15a0f-ca23-4758-9930-1e3060968082] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003673178s
addons_test.go:905: (dbg) Run:  kubectl --context addons-868996 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-868996 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-868996 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [b315050e-272d-4b46-a127-38b6aab864c9] Pending
helpers_test.go:353: "test-job-nginx-0" [b315050e-272d-4b46-a127-38b6aab864c9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [b315050e-272d-4b46-a127-38b6aab864c9] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003503859s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-868996 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-868996 addons disable volcano --alsologtostderr -v=1: (11.938806442s)
--- PASS: TestAddons/serial/Volcano (43.58s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-868996 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-868996 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (9.85s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-868996 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-868996 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [76e81625-ba13-4e4c-9e48-159ecab5dd37] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [76e81625-ba13-4e4c-9e48-159ecab5dd37] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004396902s
addons_test.go:696: (dbg) Run:  kubectl --context addons-868996 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-868996 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-868996 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-868996 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.85s)
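
The three exec checks above cover what the gcp-auth addon injects into workloads: a GOOGLE_APPLICATION_CREDENTIALS variable, a mounted /google-app-creds.json, and a GOOGLE_CLOUD_PROJECT value. A condensed manual spot-check against the same busybox pod:

    kubectl --context addons-868996 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT
    kubectl --context addons-868996 exec busybox -- cat /google-app-creds.json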

TestAddons/parallel/Registry (15.79s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 4.475735ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-kdzvb" [9e47701b-836a-4433-a94e-3d04fa822af2] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003521379s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-dzzd5" [890dab07-f414-41e9-8f84-a90842b5b021] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003511407s
addons_test.go:394: (dbg) Run:  kubectl --context addons-868996 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-868996 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-868996 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.797496868s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-868996 ip
2025/12/10 06:16:32 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-868996 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.79s)
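
The registry is probed twice: in-cluster through its Service DNS name, and from the host via the node IP on port 5000 (the DEBUG GET above). A sketch of the host-side probe; /v2/_catalog is the standard Docker registry HTTP API listing endpoint, not something this test itself calls:

    curl http://$(out/minikube-linux-arm64 -p addons-868996 ip):5000/v2/_catalog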

TestAddons/parallel/RegistryCreds (0.74s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 4.191941ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-868996
addons_test.go:334: (dbg) Run:  kubectl --context addons-868996 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-868996 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.74s)

TestAddons/parallel/Ingress (19.07s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-868996 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-868996 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-868996 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [75f2f96f-347b-4bf2-97ca-3afa7126f158] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [75f2f96f-347b-4bf2-97ca-3afa7126f158] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003454011s
I1210 06:17:00.579711  786751 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-868996 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-868996 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-868996 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-868996 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-868996 addons disable ingress-dns --alsologtostderr -v=1: (1.435452602s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-868996 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-868996 addons disable ingress --alsologtostderr -v=1: (7.855094399s)
--- PASS: TestAddons/parallel/Ingress (19.07s)
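
Two paths are exercised here: the ingress-nginx controller answering on the node when given a Host header, and ingress-dns resolving test hostnames when the node IP is used as the DNS server. Manual equivalents of the two checks, with the IP taken from this log:

    out/minikube-linux-arm64 -p addons-868996 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test 192.168.49.2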

TestAddons/parallel/InspektorGadget (10.81s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-8krd2" [816da422-49de-41f6-8cae-ae94449a0370] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005079045s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-868996 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-868996 addons disable inspektor-gadget --alsologtostderr -v=1: (5.803242674s)
--- PASS: TestAddons/parallel/InspektorGadget (10.81s)

TestAddons/parallel/MetricsServer (6.97s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.032549ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-nh6sx" [52016a94-6239-45d5-8506-a82c7b8cddc5] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003566123s
addons_test.go:465: (dbg) Run:  kubectl --context addons-868996 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-868996 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.97s)

TestAddons/parallel/CSI (39.73s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1210 06:16:32.882908  786751 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1210 06:16:32.902289  786751 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1210 06:16:32.902317  786751 kapi.go:107] duration metric: took 23.105522ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 23.116017ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-868996 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-868996 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-868996 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-868996 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-868996 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-868996 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-868996 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-868996 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-868996 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [3a904d60-7aef-4968-8dbc-4b2346937119] Pending
helpers_test.go:353: "task-pv-pod" [3a904d60-7aef-4968-8dbc-4b2346937119] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [3a904d60-7aef-4968-8dbc-4b2346937119] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003780085s
addons_test.go:574: (dbg) Run:  kubectl --context addons-868996 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-868996 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-868996 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-868996 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-868996 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-868996 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-868996 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-868996 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-868996 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-868996 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-868996 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-868996 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-868996 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-868996 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [d221735c-62fe-4b0c-bfc0-55e02bc123a7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [d221735c-62fe-4b0c-bfc0-55e02bc123a7] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005185524s
addons_test.go:616: (dbg) Run:  kubectl --context addons-868996 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-868996 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-868996 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-868996 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-868996 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-868996 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.905448884s)
--- PASS: TestAddons/parallel/CSI (39.73s)
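
The flow above is PVC -> pod -> VolumeSnapshot -> restored PVC -> restored pod. As a rough sketch of what a claim like testdata/csi-hostpath-driver/pvc.yaml might contain; the csi-hostpath-sc class name is an assumption about the addon's storage class, not read from this log:

    # pvc.yaml (sketch; csi-hostpath-sc is an assumed class name)
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
      storageClassName: csi-hostpath-sc

    kubectl --context addons-868996 create -f pvc.yaml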

TestAddons/parallel/Headlamp (17.06s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-868996 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-868996 --alsologtostderr -v=1: (1.116434341s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-g6v5v" [0192e3f9-5603-4126-8dd0-7a0183af493d] Pending
helpers_test.go:353: "headlamp-dfcdc64b-g6v5v" [0192e3f9-5603-4126-8dd0-7a0183af493d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-g6v5v" [0192e3f9-5603-4126-8dd0-7a0183af493d] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-g6v5v" [0192e3f9-5603-4126-8dd0-7a0183af493d] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.00345673s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-868996 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-868996 addons disable headlamp --alsologtostderr -v=1: (5.936909131s)
--- PASS: TestAddons/parallel/Headlamp (17.06s)

TestAddons/parallel/CloudSpanner (5.58s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-k4sjk" [0ddf1ebe-17e1-4be2-b341-a582faa68822] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003569458s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-868996 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

TestAddons/parallel/LocalPath (51.06s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-868996 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-868996 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-868996 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-868996 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-868996 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-868996 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-868996 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [760b9beb-8d63-4d33-8a75-6e5d2be1ba50] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [760b9beb-8d63-4d33-8a75-6e5d2be1ba50] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [760b9beb-8d63-4d33-8a75-6e5d2be1ba50] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.002962826s
addons_test.go:969: (dbg) Run:  kubectl --context addons-868996 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-868996 ssh "cat /opt/local-path-provisioner/pvc-96885002-802c-4f8f-9685-b326ee9000bb_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-868996 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-868996 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-868996 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-868996 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.908295648s)
--- PASS: TestAddons/parallel/LocalPath (51.06s)
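
storage-provisioner-rancher installs the local-path provisioner, which backs claims with host directories under /opt/local-path-provisioner (the ssh "cat" above reads file1 back from exactly such a path). A sketch of a compatible claim; local-path is the provisioner's conventional class name, assumed here:

    # test-pvc.yaml (sketch; local-path is an assumed class name)
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 128Mi
      storageClassName: local-path

    kubectl --context addons-868996 create -f test-pvc.yaml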

TestAddons/parallel/NvidiaDevicePlugin (5.52s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-pbrc9" [b76dcdc2-359d-4898-ac0f-6692d6358f94] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003344921s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-868996 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.52s)

TestAddons/parallel/Yakd (11s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-jrrtj" [ae5064cf-f563-437a-8905-7bc7be6cbbaf] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003797323s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-868996 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-868996 addons disable yakd --alsologtostderr -v=1: (5.996293872s)
--- PASS: TestAddons/parallel/Yakd (11.00s)

TestAddons/StoppedEnableDisable (12.34s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-868996
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-868996: (12.072073095s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-868996
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-868996
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-868996
--- PASS: TestAddons/StoppedEnableDisable (12.34s)

TestCertOptions (47.29s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-061301 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E1210 07:35:14.424003  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:35:24.254606  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-061301 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (43.749525808s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-061301 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-061301 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-061301 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-061301" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-061301
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-061301: (2.560695429s)
--- PASS: TestCertOptions (47.29s)
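
The three verification steps above can be reproduced by hand against a profile started with the same SAN and port flags; a minimal sketch (the grep filters are illustrative, not the test's exact assertions):

    # dump the generated apiserver certificate and inspect its SANs
    out/minikube-linux-arm64 -p cert-options-061301 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'
    # confirm the kubeconfig was written with the custom apiserver port 8555
    kubectl --context cert-options-061301 config view | grep 8555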

TestCertExpiration (235.36s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-611923 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-611923 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (43.908468511s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-611923 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-611923 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.627722305s)
helpers_test.go:176: Cleaning up "cert-expiration-611923" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-611923
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-611923: (2.826923847s)
--- PASS: TestCertExpiration (235.36s)
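
The two starts above bracket certificate expiry: the first issues certs with a 3-minute lifetime, the harness waits out the expiry (hence the ~235s total against ~53s of start time), and the second start regenerates them with an 8760h (one-year) lifetime. The same sequence, condensed:

    # issue deliberately short-lived (3m) cluster certificates
    out/minikube-linux-arm64 start -p cert-expiration-611923 --memory=3072 \
      --cert-expiration=3m --driver=docker --container-runtime=containerd
    # ...let the certs expire, then restart with a one-year expiry
    out/minikube-linux-arm64 start -p cert-expiration-611923 --memory=3072 \
      --cert-expiration=8760h --driver=docker --container-runtime=containerd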

TestForceSystemdFlag (33.67s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-139922 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1210 07:33:58.856978  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-139922 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (31.237889277s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-139922 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-139922" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-139922
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-139922: (2.125022756s)
--- PASS: TestForceSystemdFlag (33.67s)
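
The "cat /etc/containerd/config.toml" step above is the assertion point: with --force-systemd, containerd should be switched to the systemd cgroup driver. A minimal manual check, assuming the standard containerd runc option is what the test matches (the exact string is not shown in this log):

    out/minikube-linux-arm64 -p force-systemd-flag-139922 ssh \
      "cat /etc/containerd/config.toml" | grep SystemdCgroup
    # expected with --force-systemd: SystemdCgroup = true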

TestForceSystemdEnv (34.93s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-355914 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-355914 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (32.04012214s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-355914 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-355914" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-355914
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-355914: (2.500918065s)
--- PASS: TestForceSystemdEnv (34.93s)
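
This variant drives the same containerd check through the environment instead of a flag. The variable itself is not visible in this log, but minikube reads MINIKUBE_FORCE_SYSTEMD (it shows up, unset, in the dry-run output later in this report); a sketch under that assumption:

    MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-355914 \
      --memory=3072 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p force-systemd-env-355914 ssh "cat /etc/containerd/config.toml"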

TestDockerEnvContainerd (48.49s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-810667 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-810667 --driver=docker  --container-runtime=containerd: (32.230006817s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-810667"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-810667": (1.084118243s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-p0bburOiAJ9E/agent.806086" SSH_AGENT_PID="806087" DOCKER_HOST=ssh://docker@127.0.0.1:33515 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-p0bburOiAJ9E/agent.806086" SSH_AGENT_PID="806087" DOCKER_HOST=ssh://docker@127.0.0.1:33515 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-p0bburOiAJ9E/agent.806086" SSH_AGENT_PID="806087" DOCKER_HOST=ssh://docker@127.0.0.1:33515 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.270441304s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-p0bburOiAJ9E/agent.806086" SSH_AGENT_PID="806087" DOCKER_HOST=ssh://docker@127.0.0.1:33515 docker image ls"
helpers_test.go:176: Cleaning up "dockerenv-810667" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-810667
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-810667: (2.479898709s)
--- PASS: TestDockerEnvContainerd (48.49s)
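
The sequence above is the docker-env-over-SSH workflow end to end: docker-env --ssh-host --ssh-add starts an SSH agent, loads the node key, and prints the SSH_AUTH_SOCK/SSH_AGENT_PID/DOCKER_HOST exports, after which the host's docker CLI talks to the engine inside the node. A condensed sketch that evals the exports instead of pasting them:

    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-810667)"
    docker version          # now answered by the engine inside the node
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls         # the freshly built image should appear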

TestErrorSpam/setup (33.08s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-423003 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-423003 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-423003 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-423003 --driver=docker  --container-runtime=containerd: (33.080041136s)
--- PASS: TestErrorSpam/setup (33.08s)

TestErrorSpam/start (0.76s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-423003 --log_dir /tmp/nospam-423003 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-423003 --log_dir /tmp/nospam-423003 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-423003 --log_dir /tmp/nospam-423003 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

TestErrorSpam/status (1.14s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-423003 --log_dir /tmp/nospam-423003 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-423003 --log_dir /tmp/nospam-423003 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-423003 --log_dir /tmp/nospam-423003 status
--- PASS: TestErrorSpam/status (1.14s)

TestErrorSpam/pause (1.77s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-423003 --log_dir /tmp/nospam-423003 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-423003 --log_dir /tmp/nospam-423003 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-423003 --log_dir /tmp/nospam-423003 pause
--- PASS: TestErrorSpam/pause (1.77s)

TestErrorSpam/unpause (1.75s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-423003 --log_dir /tmp/nospam-423003 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-423003 --log_dir /tmp/nospam-423003 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-423003 --log_dir /tmp/nospam-423003 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

TestErrorSpam/stop (1.64s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-423003 --log_dir /tmp/nospam-423003 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-423003 --log_dir /tmp/nospam-423003 stop: (1.442402986s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-423003 --log_dir /tmp/nospam-423003 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-423003 --log_dir /tmp/nospam-423003 stop
--- PASS: TestErrorSpam/stop (1.64s)

TestFunctional/serial/CopySyncFile (0.01s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

TestFunctional/serial/StartWithProxy (81.55s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-634209 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1210 06:20:14.425304  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:20:14.432089  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:20:14.443407  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:20:14.464755  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:20:14.506133  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:20:14.587534  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:20:14.749024  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:20:15.070643  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:20:15.712689  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:20:16.994057  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:20:19.555366  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:20:24.677006  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:20:34.918502  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:20:55.400489  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-634209 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m21.548848523s)
--- PASS: TestFunctional/serial/StartWithProxy (81.55s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.24s)
=== RUN   TestFunctional/serial/SoftStart
I1210 06:21:20.227550  786751 config.go:182] Loaded profile config "functional-634209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-634209 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-634209 --alsologtostderr -v=8: (7.235806618s)
functional_test.go:678: soft start took 7.239455052s for "functional-634209" cluster.
I1210 06:21:27.463690  786751 config.go:182] Loaded profile config "functional-634209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (7.24s)

TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-634209 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.42s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-634209 cache add registry.k8s.io/pause:3.1: (1.266950464s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-634209 cache add registry.k8s.io/pause:3.3: (1.088180195s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-634209 cache add registry.k8s.io/pause:latest: (1.060537008s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.42s)

TestFunctional/serial/CacheCmd/cache/add_local (1.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-634209 /tmp/TestFunctionalserialCacheCmdcacheadd_local2796511022/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 cache add minikube-local-cache-test:functional-634209
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 cache delete minikube-local-cache-test:functional-634209
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-634209
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.89s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-634209 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (298.528176ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.89s)
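
The reload check above is a three-step round trip: remove the cached image inside the node, confirm crictl inspecti now fails (exit status 1, "no such image"), then let cache reload restore it from the host-side cache. Condensed:

    out/minikube-linux-arm64 -p functional-634209 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-634209 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image gone
    out/minikube-linux-arm64 -p functional-634209 cache reload
    out/minikube-linux-arm64 -p functional-634209 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again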

TestFunctional/serial/CacheCmd/cache/delete (0.14s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 kubectl -- --context functional-634209 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-634209 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (45.41s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-634209 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1210 06:21:36.361998  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-634209 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.409512532s)
functional_test.go:776: restart took 45.409613547s for "functional-634209" cluster.
I1210 06:22:20.442948  786751 config.go:182] Loaded profile config "functional-634209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (45.41s)

TestFunctional/serial/ComponentHealth (0.11s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-634209 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
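
The health check selects control-plane pods by the tier=control-plane label and reads each pod's phase and Ready condition out of the JSON. A rough jsonpath equivalent (the exact fields the helper parses are an assumption):

    kubectl --context functional-634209 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{" phase="}{.status.phase}{"\n"}{end}'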

TestFunctional/serial/LogsCmd (1.47s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-634209 logs: (1.470188151s)
--- PASS: TestFunctional/serial/LogsCmd (1.47s)

TestFunctional/serial/LogsFileCmd (1.45s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 logs --file /tmp/TestFunctionalserialLogsFileCmd3671927972/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-634209 logs --file /tmp/TestFunctionalserialLogsFileCmd3671927972/001/logs.txt: (1.447281027s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.45s)

TestFunctional/serial/InvalidService (4.42s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-634209 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-634209
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-634209: exit status 115 (437.305794ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32720 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-634209 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.42s)
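
The expected failure here: the Service object exists, so a NodePort URL is printed, but no pod backs it, so minikube service exits 115 with SVC_UNREACHABLE. Any Service whose selector matches nothing reproduces this; condensed, reusing the test's manifest path:

    kubectl --context functional-634209 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-arm64 service invalid-svc -p functional-634209; echo $?   # 115
    kubectl --context functional-634209 delete -f testdata/invalidsvc.yaml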

TestFunctional/parallel/ConfigCmd (0.51s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-634209 config get cpus: exit status 14 (70.721864ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-634209 config get cpus: exit status 14 (80.261336ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
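
The contract exercised above: config get on an unset key exits 14 with "specified key could not be found in config", while set/get/unset round-trip cleanly. Condensed:

    out/minikube-linux-arm64 -p functional-634209 config get cpus    # exit 14 while unset
    out/minikube-linux-arm64 -p functional-634209 config set cpus 2
    out/minikube-linux-arm64 -p functional-634209 config get cpus    # prints 2
    out/minikube-linux-arm64 -p functional-634209 config unset cpus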

TestFunctional/parallel/DashboardCmd (6.91s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-634209 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-634209 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 822531: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.91s)

TestFunctional/parallel/DryRun (0.44s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-634209 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-634209 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (189.52992ms)

-- stdout --
	* [functional-634209] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1210 06:22:57.389818  821053 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:22:57.389969  821053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:22:57.389982  821053 out.go:374] Setting ErrFile to fd 2...
	I1210 06:22:57.389988  821053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:22:57.390287  821053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 06:22:57.390715  821053 out.go:368] Setting JSON to false
	I1210 06:22:57.391712  821053 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18302,"bootTime":1765329476,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 06:22:57.391788  821053 start.go:143] virtualization:  
	I1210 06:22:57.395243  821053 out.go:179] * [functional-634209] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:22:57.398325  821053 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:22:57.398402  821053 notify.go:221] Checking for updates...
	I1210 06:22:57.404070  821053 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:22:57.407022  821053 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:22:57.409861  821053 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 06:22:57.413102  821053 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:22:57.416006  821053 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:22:57.419367  821053 config.go:182] Loaded profile config "functional-634209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1210 06:22:57.420001  821053 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:22:57.453139  821053 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:22:57.453258  821053 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:22:57.510152  821053 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-10 06:22:57.500457978 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:22:57.510259  821053 docker.go:319] overlay module found
	I1210 06:22:57.513268  821053 out.go:179] * Using the docker driver based on existing profile
	I1210 06:22:57.516093  821053 start.go:309] selected driver: docker
	I1210 06:22:57.516118  821053 start.go:927] validating driver "docker" against &{Name:functional-634209 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-634209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:22:57.516240  821053 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:22:57.519799  821053 out.go:203] 
	W1210 06:22:57.522670  821053 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 06:22:57.525496  821053 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-634209 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.44s)
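
Both runs rely on start's up-front validation under --dry-run: a 250MB memory request fails fast with exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY, usable minimum 1800MB) before anything is touched, while the second invocation with no memory override validates cleanly. Condensed:

    out/minikube-linux-arm64 start -p functional-634209 --dry-run --memory 250MB \
      --alsologtostderr --driver=docker --container-runtime=containerd    # exit 23
    out/minikube-linux-arm64 start -p functional-634209 --dry-run \
      --alsologtostderr -v=1 --driver=docker --container-runtime=containerd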

TestFunctional/parallel/InternationalLanguage (0.2s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-634209 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-634209 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (203.914231ms)

-- stdout --
	* [functional-634209] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1210 06:23:03.333918  822274 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:23:03.334177  822274 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:23:03.334202  822274 out.go:374] Setting ErrFile to fd 2...
	I1210 06:23:03.334222  822274 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:23:03.335364  822274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 06:23:03.335825  822274 out.go:368] Setting JSON to false
	I1210 06:23:03.336954  822274 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18308,"bootTime":1765329476,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 06:23:03.337061  822274 start.go:143] virtualization:  
	I1210 06:23:03.340336  822274 out.go:179] * [functional-634209] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1210 06:23:03.344138  822274 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:23:03.344384  822274 notify.go:221] Checking for updates...
	I1210 06:23:03.350555  822274 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:23:03.353444  822274 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:23:03.356401  822274 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 06:23:03.359163  822274 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:23:03.362153  822274 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:23:03.365645  822274 config.go:182] Loaded profile config "functional-634209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1210 06:23:03.366284  822274 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:23:03.392033  822274 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:23:03.392161  822274 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:23:03.462175  822274 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-10 06:23:03.45130908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:23:03.462296  822274 docker.go:319] overlay module found
	I1210 06:23:03.465597  822274 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1210 06:23:03.468669  822274 start.go:309] selected driver: docker
	I1210 06:23:03.468698  822274 start.go:927] validating driver "docker" against &{Name:functional-634209 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-634209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:23:03.468826  822274 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:23:03.472427  822274 out.go:203] 
	W1210 06:23:03.475297  822274 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1210 06:23:03.478118  822274 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)
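
Same failing dry-run as above, but with the RSRC_INSUFFICIENT_REQ_MEMORY message localized to French. How the test selects the locale is not visible in this log; minikube follows the standard locale environment, so presumably something along the lines of:

    LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-634209 --dry-run \
      --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd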

TestFunctional/parallel/StatusCmd (1.2s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.20s)
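
Three output modes are exercised: the default table, a Go template via -f (note the harness's format string spells the label "kublet"; the underlying field is {{.Kubelet}}), and JSON via -o json. Condensed, with the label typo corrected:

    out/minikube-linux-arm64 -p functional-634209 status
    out/minikube-linux-arm64 -p functional-634209 status \
      -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-arm64 -p functional-634209 status -o json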

TestFunctional/parallel/ServiceCmdConnect (8.6s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-634209 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-634209 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-lx5t2" [1b8c8d8c-d7dd-4f7f-9e7d-4ed0df293f7c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-lx5t2" [1b8c8d8c-d7dd-4f7f-9e7d-4ed0df293f7c] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004660911s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30486
functional_test.go:1680: http://192.168.49.2:30486: success! body:
Request served by hello-node-connect-7d85dfc575-lx5t2

HTTP/1.1 GET /

Host: 192.168.49.2:30486
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.60s)
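
The flow above is reproducible by hand: create a deployment from the echo-server image, expose it as a NodePort service, and let minikube resolve the node URL. A condensed sketch, with curl standing in for the test's HTTP check:

    kubectl --context functional-634209 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-634209 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-arm64 -p functional-634209 service hello-node-connect --url)
    curl -s "$URL"   # echo-server replies with the request it received, as in the body above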

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (20.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [83380622-7169-4c59-ba3c-84a54f5b6755] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003853258s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-634209 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-634209 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-634209 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-634209 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [ec411352-ea0c-494d-918b-229a5aeaf9e5] Pending
helpers_test.go:353: "sp-pod" [ec411352-ea0c-494d-918b-229a5aeaf9e5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [ec411352-ea0c-494d-918b-229a5aeaf9e5] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003708468s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-634209 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-634209 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-634209 delete -f testdata/storage-provisioner/pod.yaml: (1.034383615s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-634209 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [797d5f6a-9fae-4f35-8f7f-29fb0f18a82b] Pending
helpers_test.go:353: "sp-pod" [797d5f6a-9fae-4f35-8f7f-29fb0f18a82b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003846996s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-634209 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.09s)
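
The testdata manifests themselves are not echoed into the log. A hypothetical PVC equivalent to what the sequence above implies (a claim named myclaim, bound via the default storage-provisioner class, mounted by sp-pod at /tmp/mount) would look roughly like:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 500Mi

The touch-before-delete / ls-after-recreate pair is the real assertion: the file written by the first sp-pod must still be visible to the second one, proving the volume outlives the pod.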

                                                
                                    
TestFunctional/parallel/SSHCmd (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.76s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh -n functional-634209 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 cp functional-634209:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1042390727/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh -n functional-634209 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh -n functional-634209 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.04s)

                                                
                                    
TestFunctional/parallel/FileSync (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/786751/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh "sudo cat /etc/test/nested/copy/786751/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)
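
File sync works by mirroring anything placed under $MINIKUBE_HOME/files (by default ~/.minikube/files) into the node's filesystem on start, which is how /etc/test/nested/copy/786751/hosts got there. A minimal sketch, assuming the default MINIKUBE_HOME:

    mkdir -p ~/.minikube/files/etc/test/nested/copy/786751
    echo 'Test file for checking file sync process' > ~/.minikube/files/etc/test/nested/copy/786751/hosts
    # the file lands at /etc/test/nested/copy/786751/hosts inside the node on the next start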

                                                
                                    
TestFunctional/parallel/CertSync (2.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/786751.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh "sudo cat /etc/ssl/certs/786751.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/786751.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh "sudo cat /usr/share/ca-certificates/786751.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/7867512.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh "sudo cat /etc/ssl/certs/7867512.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/7867512.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh "sudo cat /usr/share/ca-certificates/7867512.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.21s)
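
The numeric names checked last (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash aliases for the same synced PEMs; that naming scheme is how TLS clients on the node locate a trusted cert. The hash half of the filename can be recomputed from the certificate itself:

    openssl x509 -noout -subject_hash -in /etc/ssl/certs/786751.pem
    # prints the 8-hex-digit value used as the .0 filename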

                                                
                                    
TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-634209 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)
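
The go-template above dumps every label key on the first node. For interactive use the same information is easier to read with --show-labels, and a single minikube-provisioned label can be pulled with jsonpath (dots in the key escaped with \.):

    kubectl --context functional-634209 get nodes --show-labels
    kubectl --context functional-634209 get node functional-634209 \
      -o jsonpath='{.metadata.labels.minikube\.k8s\.io/name}'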

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-634209 ssh "sudo systemctl is-active docker": exit status 1 (372.622467ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-634209 ssh "sudo systemctl is-active crio": exit status 1 (357.260116ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)
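
Both non-zero exits here are expected: systemctl is-active prints the unit state and exits with status 3 when the unit is inactive, which ssh then surfaces as "Process exited with status 3". The same check by hand, with containerd as the active runtime on this profile:

    out/minikube-linux-arm64 -p functional-634209 ssh "sudo systemctl is-active containerd"   # active, exit 0
    out/minikube-linux-arm64 -p functional-634209 ssh "sudo systemctl is-active docker"       # inactive, exit 3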

                                                
                                    
TestFunctional/parallel/License (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

                                                
                                    
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
TestFunctional/parallel/Version/components (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-634209 version -o=json --components: (1.524284735s)
--- PASS: TestFunctional/parallel/Version/components (1.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-634209 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-634209
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-634209
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-634209 image ls --format short --alsologtostderr:
I1210 06:23:11.340791  823893 out.go:360] Setting OutFile to fd 1 ...
I1210 06:23:11.340919  823893 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:23:11.340931  823893 out.go:374] Setting ErrFile to fd 2...
I1210 06:23:11.340936  823893 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:23:11.341349  823893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
I1210 06:23:11.342378  823893 config.go:182] Loaded profile config "functional-634209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1210 06:23:11.342591  823893 config.go:182] Loaded profile config "functional-634209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1210 06:23:11.343334  823893 cli_runner.go:164] Run: docker container inspect functional-634209 --format={{.State.Status}}
I1210 06:23:11.363814  823893 ssh_runner.go:195] Run: systemctl --version
I1210 06:23:11.363881  823893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-634209
I1210 06:23:11.390156  823893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33525 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-634209/id_rsa Username:docker}
I1210 06:23:11.497871  823893 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-634209 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2            │ sha256:1b3491 │ 20.7MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.2            │ sha256:4f982e │ 15.8MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.2            │ sha256:b178af │ 24.6MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ public.ecr.aws/nginx/nginx                  │ alpine             │ sha256:cbad63 │ 23.1MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.2            │ sha256:94bff1 │ 22.8MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ docker.io/kicbase/echo-server               │ functional-634209  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/kicbase/echo-server               │ latest             │ sha256:ce2d2c │ 2.17MB │
│ docker.io/library/minikube-local-cache-test │ functional-634209  │ sha256:54106a │ 991B   │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-634209 image ls --format table --alsologtostderr:
I1210 06:23:12.326585  824140 out.go:360] Setting OutFile to fd 1 ...
I1210 06:23:12.326923  824140 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:23:12.326955  824140 out.go:374] Setting ErrFile to fd 2...
I1210 06:23:12.326976  824140 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:23:12.327374  824140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
I1210 06:23:12.328293  824140 config.go:182] Loaded profile config "functional-634209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1210 06:23:12.328505  824140 config.go:182] Loaded profile config "functional-634209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1210 06:23:12.329304  824140 cli_runner.go:164] Run: docker container inspect functional-634209 --format={{.State.Status}}
I1210 06:23:12.351250  824140 ssh_runner.go:195] Run: systemctl --version
I1210 06:23:12.351306  824140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-634209
I1210 06:23:12.376338  824140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33525 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-634209/id_rsa Username:docker}
I1210 06:23:12.494055  824140 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-634209 image ls --format json --alsologtostderr:
[{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:54106a51504f7a89ca38a9b17f1e7c790a91bdd52bce5badc4621cab1917817f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-634209"],"size":"991"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"22802260"},{"id":"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db
3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"15775785"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],
"size":"262191"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:b178af3d91f80925cd8bec42e1813e7d463
70236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"24559643"},{"id":"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"20718696"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-634209","docker.io/kicbase/echo-server:latest"],"size":"2173567"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b46108996
9449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"23107444"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-634209 image ls --format json --alsologtostderr:
I1210 06:23:12.075487  824070 out.go:360] Setting OutFile to fd 1 ...
I1210 06:23:12.075587  824070 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:23:12.075599  824070 out.go:374] Setting ErrFile to fd 2...
I1210 06:23:12.075605  824070 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:23:12.075955  824070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
I1210 06:23:12.076877  824070 config.go:182] Loaded profile config "functional-634209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1210 06:23:12.077024  824070 config.go:182] Loaded profile config "functional-634209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1210 06:23:12.077749  824070 cli_runner.go:164] Run: docker container inspect functional-634209 --format={{.State.Status}}
I1210 06:23:12.095217  824070 ssh_runner.go:195] Run: systemctl --version
I1210 06:23:12.095276  824070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-634209
I1210 06:23:12.121623  824070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33525 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-634209/id_rsa Username:docker}
I1210 06:23:12.225235  824070 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
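
The JSON form is the scripting-friendly variant of the table output above; entries with an empty repoTags array are untagged, digest-only images (dashboard and metrics-scraper here). A small jq sketch, assuming jq is installed, to list only the tagged images:

    out/minikube-linux-arm64 -p functional-634209 image ls --format json \
      | jq -r '.[] | select(.repoTags | length > 0) | .repoTags[]'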

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-634209 image ls --format yaml --alsologtostderr:
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-634209
- docker.io/kicbase/echo-server:latest
size: "2173567"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "23107444"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "15775785"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "24559643"
- id: sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "20718696"
- id: sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "22802260"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:54106a51504f7a89ca38a9b17f1e7c790a91bdd52bce5badc4621cab1917817f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-634209
size: "991"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-634209 image ls --format yaml --alsologtostderr:
I1210 06:23:11.597943  823953 out.go:360] Setting OutFile to fd 1 ...
I1210 06:23:11.598182  823953 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:23:11.598197  823953 out.go:374] Setting ErrFile to fd 2...
I1210 06:23:11.598202  823953 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:23:11.598579  823953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
I1210 06:23:11.599420  823953 config.go:182] Loaded profile config "functional-634209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1210 06:23:11.599590  823953 config.go:182] Loaded profile config "functional-634209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1210 06:23:11.600226  823953 cli_runner.go:164] Run: docker container inspect functional-634209 --format={{.State.Status}}
I1210 06:23:11.618227  823953 ssh_runner.go:195] Run: systemctl --version
I1210 06:23:11.618285  823953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-634209
I1210 06:23:11.636663  823953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33525 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-634209/id_rsa Username:docker}
I1210 06:23:11.741014  823953 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-634209 ssh pgrep buildkitd: exit status 1 (337.448035ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 image build -t localhost/my-image:functional-634209 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-634209 image build -t localhost/my-image:functional-634209 testdata/build --alsologtostderr: (3.450953713s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-634209 image build -t localhost/my-image:functional-634209 testdata/build --alsologtostderr:
I1210 06:23:12.178352  824098 out.go:360] Setting OutFile to fd 1 ...
I1210 06:23:12.179195  824098 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:23:12.179212  824098 out.go:374] Setting ErrFile to fd 2...
I1210 06:23:12.179218  824098 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:23:12.179528  824098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
I1210 06:23:12.180185  824098 config.go:182] Loaded profile config "functional-634209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1210 06:23:12.183053  824098 config.go:182] Loaded profile config "functional-634209": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1210 06:23:12.183664  824098 cli_runner.go:164] Run: docker container inspect functional-634209 --format={{.State.Status}}
I1210 06:23:12.201423  824098 ssh_runner.go:195] Run: systemctl --version
I1210 06:23:12.201472  824098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-634209
I1210 06:23:12.221279  824098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33525 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-634209/id_rsa Username:docker}
I1210 06:23:12.334454  824098 build_images.go:162] Building image from path: /tmp/build.278747881.tar
I1210 06:23:12.334661  824098 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1210 06:23:12.342266  824098 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.278747881.tar
I1210 06:23:12.347032  824098 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.278747881.tar: stat -c "%s %y" /var/lib/minikube/build/build.278747881.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.278747881.tar': No such file or directory
I1210 06:23:12.347073  824098 ssh_runner.go:362] scp /tmp/build.278747881.tar --> /var/lib/minikube/build/build.278747881.tar (3072 bytes)
I1210 06:23:12.374750  824098 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.278747881
I1210 06:23:12.384495  824098 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.278747881 -xf /var/lib/minikube/build/build.278747881.tar
I1210 06:23:12.393771  824098 containerd.go:394] Building image: /var/lib/minikube/build/build.278747881
I1210 06:23:12.393853  824098 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.278747881 --local dockerfile=/var/lib/minikube/build/build.278747881 --output type=image,name=localhost/my-image:functional-634209
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:c23aaa60a415e5b377a30a87b47346c000fc054501d5c7903a0101ad960c62d3 0.0s done
#8 exporting config sha256:13ebd54a916802b634ce3075862b3c5204a30722c31f7e76c489a3e408f4ff84 0.0s done
#8 naming to localhost/my-image:functional-634209 done
#8 DONE 0.2s
I1210 06:23:15.537065  824098 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.278747881 --local dockerfile=/var/lib/minikube/build/build.278747881 --output type=image,name=localhost/my-image:functional-634209: (3.143178666s)
I1210 06:23:15.537142  824098 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.278747881
I1210 06:23:15.546368  824098 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.278747881.tar
I1210 06:23:15.553954  824098 build_images.go:218] Built localhost/my-image:functional-634209 from /tmp/build.278747881.tar
I1210 06:23:15.553982  824098 build_images.go:134] succeeded building to: functional-634209
I1210 06:23:15.553987  824098 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.02s)
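
The BuildKit trace above pins down the Dockerfile being built: a 97-byte file with three steps. Reconstructed from stages #5/#6/#7, testdata/build is roughly:

    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /

As the sudo buildctl invocation in the log shows, minikube tars the build context, copies it to the node, and drives BuildKit against containerd directly, so no Docker daemon is involved; the initial pgrep buildkitd exiting 1 simply records that the daemon was not yet running when the test began.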

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-634209
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.72s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 image load --daemon kicbase/echo-server:functional-634209 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-634209 image load --daemon kicbase/echo-server:functional-634209 --alsologtostderr: (1.203929586s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 image load --daemon kicbase/echo-server:functional-634209 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-634209 image load --daemon kicbase/echo-server:functional-634209 --alsologtostderr: (1.111368287s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-634209
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 image load --daemon kicbase/echo-server:functional-634209 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-arm64 -p functional-634209 image load --daemon kicbase/echo-server:functional-634209 --alsologtostderr: (1.092847527s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.70s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "426.584209ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "63.459114ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "520.866332ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "72.63338ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 image save kicbase/echo-server:functional-634209 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-634209 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-634209 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-634209 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 819685: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-634209 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 image rm kicbase/echo-server:functional-634209 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-634209 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-634209 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [e2c4c778-898f-4c9c-9c23-7d10c61eadb6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [e2c4c778-898f-4c9c-9c23-7d10c61eadb6] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003721444s
I1210 06:22:44.247300  786751 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.47s)
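
testdata/testsvc.yaml is not echoed into the log, but the waits above pin down its shape: a pod labeled run=nginx-svc plus a LoadBalancer service named nginx-svc in the default namespace, which the running tunnel then satisfies by assigning an ingress IP. A hypothetical minimal equivalent of the service half:

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-svc
    spec:
      type: LoadBalancer
      selector:
        run: nginx-svc
      ports:
      - port: 80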

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-634209
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 image save --daemon kicbase/echo-server:functional-634209 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-634209
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-634209 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
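Note: the ingress IP read here is only populated while the tunnel from StartTunnel is running; the same value can be queried directly with the command from this run:
  kubectl --context functional-634209 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'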

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.255.226 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-634209 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-634209 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-634209 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-5ckjv" [3a07a9f2-930e-4093-8697-315e4b9b1235] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-5ckjv" [3a07a9f2-930e-4093-8697-315e4b9b1235] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003835257s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)
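Note: a minimal sketch of the same deploy-and-expose flow using only the commands shown above; --watch stands in for the test's internal polling:
  kubectl --context functional-634209 create deployment hello-node --image kicbase/echo-server
  kubectl --context functional-634209 expose deployment hello-node --type=NodePort --port=8080
  kubectl --context functional-634209 get pods -l app=hello-node --watch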

TestFunctional/parallel/MountCmd/any-port (8.32s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-634209 /tmp/TestFunctionalparallelMountCmdany-port3340020960/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765347777783854352" to /tmp/TestFunctionalparallelMountCmdany-port3340020960/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765347777783854352" to /tmp/TestFunctionalparallelMountCmdany-port3340020960/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765347777783854352" to /tmp/TestFunctionalparallelMountCmdany-port3340020960/001/test-1765347777783854352
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-634209 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (360.629018ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1210 06:22:58.146210  786751 retry.go:31] will retry after 429.233811ms: exit status 1
E1210 06:22:58.283581  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 10 06:22 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 10 06:22 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 10 06:22 test-1765347777783854352
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh cat /mount-9p/test-1765347777783854352
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-634209 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [92f14c54-7d18-42a8-be60-22069f8c8ce9] Pending
helpers_test.go:353: "busybox-mount" [92f14c54-7d18-42a8-be60-22069f8c8ce9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [92f14c54-7d18-42a8-be60-22069f8c8ce9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [92f14c54-7d18-42a8-be60-22069f8c8ce9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.023109002s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-634209 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-634209 /tmp/TestFunctionalparallelMountCmdany-port3340020960/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.32s)
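Note: a by-hand sketch of the 9p mount check exercised above; /tmp/hostdir is a hypothetical host directory, and the mount command stays in the foreground, hence the backgrounding:
  out/minikube-linux-arm64 mount -p functional-634209 /tmp/hostdir:/mount-9p &
  # the first findmnt may race the mount coming up, as the retry above shows
  out/minikube-linux-arm64 -p functional-634209 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-634209 ssh -- ls -la /mount-9p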

TestFunctional/parallel/ServiceCmd/List (0.7s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.70s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.74s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 service list -o json
functional_test.go:1504: Took "740.647689ms" to run "out/minikube-linux-arm64 -p functional-634209 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.74s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32200
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

TestFunctional/parallel/ServiceCmd/Format (0.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

TestFunctional/parallel/ServiceCmd/URL (0.41s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32200
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)
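Note: taken together, the five ServiceCmd subtests above cover the main ways to resolve a NodePort service; the commands verbatim from this run (port 32200 is specific to this cluster):
  out/minikube-linux-arm64 -p functional-634209 service list
  out/minikube-linux-arm64 -p functional-634209 service list -o json
  out/minikube-linux-arm64 -p functional-634209 service --namespace=default --https --url hello-node   # https://192.168.49.2:32200
  out/minikube-linux-arm64 -p functional-634209 service hello-node --url --format={{.IP}}
  out/minikube-linux-arm64 -p functional-634209 service hello-node --url                               # http://192.168.49.2:32200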

TestFunctional/parallel/MountCmd/specific-port (2.49s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-634209 /tmp/TestFunctionalparallelMountCmdspecific-port1205426619/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-634209 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (555.537731ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1210 06:23:06.655819  786751 retry.go:31] will retry after 682.386102ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-634209 /tmp/TestFunctionalparallelMountCmdspecific-port1205426619/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-634209 ssh "sudo umount -f /mount-9p": exit status 1 (343.671846ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-634209 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-634209 /tmp/TestFunctionalparallelMountCmdspecific-port1205426619/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.49s)
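Note: the same mount flow pinned to a fixed host port; /tmp/hostdir is hypothetical. The cleanup path above tolerates umount exiting 32 ("not mounted") once the share is already torn down:
  out/minikube-linux-arm64 mount -p functional-634209 /tmp/hostdir:/mount-9p --port 46464 &
  out/minikube-linux-arm64 -p functional-634209 ssh "findmnt -T /mount-9p | grep 9p"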

TestFunctional/parallel/MountCmd/VerifyCleanup (2.01s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-634209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2460956714/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-634209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2460956714/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-634209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2460956714/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-634209 ssh "findmnt -T" /mount1: (1.011378539s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-634209 ssh "findmnt -T" /mount3
2025/12/10 06:23:10 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-634209 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-634209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2460956714/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-634209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2460956714/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-634209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2460956714/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.01s)
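Note: cleanup here relies on a single kill switch rather than stopping each daemon: the --kill invocation above terminates every mount process for the profile, which is why the three stop attempts then find no parent process:
  out/minikube-linux-arm64 mount -p functional-634209 --kill=true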

TestFunctional/delete_echo-server_images (0.05s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-634209
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-634209
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-634209
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.31s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-534748 cache add registry.k8s.io/pause:3.1: (1.127627797s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-534748 cache add registry.k8s.io/pause:3.3: (1.12890865s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-534748 cache add registry.k8s.io/pause:latest: (1.05423549s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.31s)
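Note: a minimal sketch of the cache workflow these CacheCmd subtests walk through, with the image names used above:
  out/minikube-linux-arm64 -p functional-534748 cache add registry.k8s.io/pause:3.1
  out/minikube-linux-arm64 cache list
  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1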

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach1750720304/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 cache add minikube-local-cache-test:functional-534748
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 cache delete minikube-local-cache-test:functional-534748
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-534748
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.83s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-534748 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (276.446048ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.83s)
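Note: this subtest verifies the cache can restore an image deleted from the node; the sequence, verbatim from this run (the failed inspecti in the middle is the expected state):
  out/minikube-linux-arm64 -p functional-534748 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-arm64 -p functional-534748 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
  out/minikube-linux-arm64 -p functional-534748 cache reload
  out/minikube-linux-arm64 -p functional-534748 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again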

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.95s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.95s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.03s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3475115/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-534748 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3475115/001/logs.txt: (1.024473501s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.03s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.45s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-534748 config get cpus: exit status 14 (67.908724ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-534748 config get cpus: exit status 14 (84.681182ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.45s)
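Note: exit status 14, seen twice above, is the expected "key not found" result rather than a failure; the round trip being tested is:
  out/minikube-linux-arm64 -p functional-534748 config set cpus 2
  out/minikube-linux-arm64 -p functional-534748 config get cpus      # prints 2
  out/minikube-linux-arm64 -p functional-534748 config unset cpus
  out/minikube-linux-arm64 -p functional-534748 config get cpus      # exit status 14: key not in config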

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.46s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-534748 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-534748 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (205.259066ms)

-- stdout --
	* [functional-534748] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1210 06:52:45.611323  853710 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:52:45.611572  853710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:52:45.611601  853710 out.go:374] Setting ErrFile to fd 2...
	I1210 06:52:45.611625  853710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:52:45.611903  853710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 06:52:45.613386  853710 out.go:368] Setting JSON to false
	I1210 06:52:45.614284  853710 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":20090,"bootTime":1765329476,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 06:52:45.614381  853710 start.go:143] virtualization:  
	I1210 06:52:45.617614  853710 out.go:179] * [functional-534748] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:52:45.621242  853710 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:52:45.621318  853710 notify.go:221] Checking for updates...
	I1210 06:52:45.627064  853710 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:52:45.629851  853710 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:52:45.632751  853710 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 06:52:45.635705  853710 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:52:45.638541  853710 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:52:45.641874  853710 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:52:45.642445  853710 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:52:45.676173  853710 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:52:45.676291  853710 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:52:45.739608  853710 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:52:45.729531947 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:52:45.739718  853710 docker.go:319] overlay module found
	I1210 06:52:45.742828  853710 out.go:179] * Using the docker driver based on existing profile
	I1210 06:52:45.745628  853710 start.go:309] selected driver: docker
	I1210 06:52:45.745645  853710 start.go:927] validating driver "docker" against &{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:52:45.745772  853710 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:52:45.749146  853710 out.go:203] 
	W1210 06:52:45.751961  853710 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 06:52:45.754649  853710 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-534748 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.46s)
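Note: exit status 23 above is the intended outcome: --dry-run validates the requested resources against the existing profile without starting anything, and 250MB is below minikube's usable floor. Reduced to its essentials:
  out/minikube-linux-arm64 start -p functional-534748 --dry-run --memory 250MB --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
  # exit status 23: RSRC_INSUFFICIENT_REQ_MEMORY (250MiB < 1800MB usable minimum)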

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-534748 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-534748 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (189.309518ms)

-- stdout --
	* [functional-534748] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1210 06:52:45.413361  853657 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:52:45.413590  853657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:52:45.413624  853657 out.go:374] Setting ErrFile to fd 2...
	I1210 06:52:45.413647  853657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:52:45.414036  853657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 06:52:45.414544  853657 out.go:368] Setting JSON to false
	I1210 06:52:45.415441  853657 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":20090,"bootTime":1765329476,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 06:52:45.415541  853657 start.go:143] virtualization:  
	I1210 06:52:45.418941  853657 out.go:179] * [functional-534748] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1210 06:52:45.422750  853657 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:52:45.422845  853657 notify.go:221] Checking for updates...
	I1210 06:52:45.429644  853657 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:52:45.432634  853657 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 06:52:45.435647  853657 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 06:52:45.438676  853657 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:52:45.441734  853657 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:52:45.445160  853657 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 06:52:45.445863  853657 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:52:45.469922  853657 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:52:45.470057  853657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:52:45.530266  853657 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:52:45.520903996 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:52:45.530376  853657 docker.go:319] overlay module found
	I1210 06:52:45.533530  853657 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1210 06:52:45.536535  853657 start.go:309] selected driver: docker
	I1210 06:52:45.536560  853657 start.go:927] validating driver "docker" against &{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:52:45.536680  853657 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:52:45.540178  853657 out.go:203] 
	W1210 06:52:45.543318  853657 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1210 06:52:45.549311  853657 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.73s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.73s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh -n functional-534748 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 cp functional-534748:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3903131200/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh -n functional-534748 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh -n functional-534748 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.23s)
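Note: the cp invocations above cover host-to-node, node-to-host, and host-to-a-new-node-path copies; a trimmed sketch (the /tmp/cp-test.txt destination here is illustrative):
  out/minikube-linux-arm64 -p functional-534748 cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-linux-arm64 -p functional-534748 cp functional-534748:/home/docker/cp-test.txt /tmp/cp-test.txt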

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/786751/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh "sudo cat /etc/test/nested/copy/786751/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.66s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/786751.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh "sudo cat /etc/ssl/certs/786751.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/786751.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh "sudo cat /usr/share/ca-certificates/786751.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/7867512.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh "sudo cat /etc/ssl/certs/7867512.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/7867512.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh "sudo cat /usr/share/ca-certificates/7867512.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.66s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.53s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-534748 ssh "sudo systemctl is-active docker": exit status 1 (266.371634ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-534748 ssh "sudo systemctl is-active crio": exit status 1 (265.351192ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.53s)
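Note: both probes above are expected to fail: with containerd as the active runtime, systemctl is-active reports "inactive" and exits 3 for the other runtimes, which the ssh wrapper surfaces as exit status 1:
  out/minikube-linux-arm64 -p functional-534748 ssh "sudo systemctl is-active docker"   # inactive, exit 3
  out/minikube-linux-arm64 -p functional-534748 ssh "sudo systemctl is-active crio"     # inactive, exit 3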

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-534748 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-534748 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "343.076167ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "56.357163ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "311.110968ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "67.599989ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.75s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4177155203/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-534748 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (357.001602ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1210 06:52:39.026323  786751 retry.go:31] will retry after 329.691128ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4177155203/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-534748 ssh "sudo umount -f /mount-9p": exit status 1 (317.885006ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-534748 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4177155203/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.75s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.81s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3563118548/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3563118548/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3563118548/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-534748 ssh "findmnt -T" /mount1: exit status 1 (510.606707ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1210 06:52:40.935751  786751 retry.go:31] will retry after 424.518678ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-534748 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3563118548/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3563118548/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-534748 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3563118548/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.81s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.52s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-534748 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-534748
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-534748
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-534748 image ls --format short --alsologtostderr:
I1210 06:52:58.018513  855872 out.go:360] Setting OutFile to fd 1 ...
I1210 06:52:58.018658  855872 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:52:58.018677  855872 out.go:374] Setting ErrFile to fd 2...
I1210 06:52:58.018683  855872 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:52:58.018969  855872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
I1210 06:52:58.019627  855872 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1210 06:52:58.019751  855872 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1210 06:52:58.020262  855872 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
I1210 06:52:58.040173  855872 ssh_runner.go:195] Run: systemctl --version
I1210 06:52:58.040229  855872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
I1210 06:52:58.057286  855872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
I1210 06:52:58.152991  855872 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-534748 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0     │ sha256:ccd634 │ 24.7MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0     │ sha256:68b5f7 │ 20.7MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0     │ sha256:163787 │ 15.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ docker.io/kicbase/echo-server               │ functional-534748  │ sha256:ce2d2c │ 2.17MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0     │ sha256:404c2e │ 22.4MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/library/minikube-local-cache-test │ functional-534748  │ sha256:54106a │ 991B   │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1            │ sha256:e08f4d │ 21.2MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-534748 image ls --format table --alsologtostderr:
I1210 06:52:58.467083  855952 out.go:360] Setting OutFile to fd 1 ...
I1210 06:52:58.467204  855952 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:52:58.467210  855952 out.go:374] Setting ErrFile to fd 2...
I1210 06:52:58.467216  855952 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:52:58.467574  855952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
I1210 06:52:58.468504  855952 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1210 06:52:58.468636  855952 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1210 06:52:58.469507  855952 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
I1210 06:52:58.487318  855952 ssh_runner.go:195] Run: systemctl --version
I1210 06:52:58.487383  855952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
I1210 06:52:58.505046  855952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
I1210 06:52:58.601115  855952 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-534748 image ls --format json --alsologtostderr:
[{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad
918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21168808"},{"id":"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"24678359"},{"id":"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"20661043"},{"id":"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"22429671"},{"id":"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5
c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"15391364"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-534748"],"size":"2173567"},{"id":"sha256:54106a51504f7a89ca38a9b17f1e7c790a91bdd52bce5badc4621cab1917817f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-534748"],"size":"991"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id
":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-534748 image ls --format json --alsologtostderr:
I1210 06:52:58.237487  855908 out.go:360] Setting OutFile to fd 1 ...
I1210 06:52:58.237613  855908 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:52:58.237624  855908 out.go:374] Setting ErrFile to fd 2...
I1210 06:52:58.237629  855908 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:52:58.237874  855908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
I1210 06:52:58.238508  855908 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1210 06:52:58.238637  855908 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1210 06:52:58.239177  855908 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
I1210 06:52:58.258407  855908 ssh_runner.go:195] Run: systemctl --version
I1210 06:52:58.258496  855908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
I1210 06:52:58.280922  855908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
I1210 06:52:58.381106  855908 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-534748 image ls --format yaml --alsologtostderr:
- id: sha256:54106a51504f7a89ca38a9b17f1e7c790a91bdd52bce5badc4621cab1917817f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-534748
size: "991"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21168808"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "22429671"
- id: sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "15391364"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "24678359"
- id: sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "20661043"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-534748
size: "2173567"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-534748 image ls --format yaml --alsologtostderr:
I1210 06:52:58.690526  855990 out.go:360] Setting OutFile to fd 1 ...
I1210 06:52:58.691328  855990 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:52:58.691362  855990 out.go:374] Setting ErrFile to fd 2...
I1210 06:52:58.691383  855990 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:52:58.691666  855990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
I1210 06:52:58.692336  855990 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1210 06:52:58.692509  855990 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1210 06:52:58.693073  855990 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
I1210 06:52:58.712531  855990 ssh_runner.go:195] Run: systemctl --version
I1210 06:52:58.712591  855990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
I1210 06:52:58.729840  855990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
I1210 06:52:58.824951  855990 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-534748 ssh pgrep buildkitd: exit status 1 (261.625856ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 image build -t localhost/my-image:functional-534748 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-534748 image build -t localhost/my-image:functional-534748 testdata/build --alsologtostderr: (2.875453654s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-534748 image build -t localhost/my-image:functional-534748 testdata/build --alsologtostderr:
I1210 06:52:59.170197  856092 out.go:360] Setting OutFile to fd 1 ...
I1210 06:52:59.170412  856092 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:52:59.170444  856092 out.go:374] Setting ErrFile to fd 2...
I1210 06:52:59.170520  856092 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:52:59.170786  856092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
I1210 06:52:59.171484  856092 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1210 06:52:59.172135  856092 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1210 06:52:59.172734  856092 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
I1210 06:52:59.189602  856092 ssh_runner.go:195] Run: systemctl --version
I1210 06:52:59.189648  856092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
I1210 06:52:59.207460  856092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
I1210 06:52:59.300877  856092 build_images.go:162] Building image from path: /tmp/build.3496043286.tar
I1210 06:52:59.300961  856092 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1210 06:52:59.308220  856092 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3496043286.tar
I1210 06:52:59.311665  856092 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3496043286.tar: stat -c "%s %y" /var/lib/minikube/build/build.3496043286.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3496043286.tar': No such file or directory
I1210 06:52:59.311693  856092 ssh_runner.go:362] scp /tmp/build.3496043286.tar --> /var/lib/minikube/build/build.3496043286.tar (3072 bytes)
I1210 06:52:59.328924  856092 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3496043286
I1210 06:52:59.336918  856092 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3496043286 -xf /var/lib/minikube/build/build.3496043286.tar
I1210 06:52:59.344801  856092 containerd.go:394] Building image: /var/lib/minikube/build/build.3496043286
I1210 06:52:59.344912  856092 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3496043286 --local dockerfile=/var/lib/minikube/build/build.3496043286 --output type=image,name=localhost/my-image:functional-534748
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s
#6 [2/3] RUN true
#6 DONE 0.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:d44e3604bf299fe8dab0ff6478d48bc3b4d733550a4eef683dc893016bc3eb9e 0.0s done
#8 exporting config sha256:9f6f0759e744bfcad5ed76b52291b2be156d76fd27a253fc9806360f77556a11 0.0s done
#8 naming to localhost/my-image:functional-534748 done
#8 DONE 0.2s
I1210 06:53:01.970841  856092 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3496043286 --local dockerfile=/var/lib/minikube/build/build.3496043286 --output type=image,name=localhost/my-image:functional-534748: (2.625896736s)
I1210 06:53:01.970936  856092 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3496043286
I1210 06:53:01.979049  856092 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3496043286.tar
I1210 06:53:01.987054  856092 build_images.go:218] Built localhost/my-image:functional-534748 from /tmp/build.3496043286.tar
I1210 06:53:01.987082  856092 build_images.go:134] succeeded building to: functional-534748
I1210 06:53:01.987087  856092 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-534748
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 image load --daemon kicbase/echo-server:functional-534748 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 image load --daemon kicbase/echo-server:functional-534748 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-534748
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 image load --daemon kicbase/echo-server:functional-534748 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 image save kicbase/echo-server:functional-534748 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 image rm kicbase/echo-server:functional-534748 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.68s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-534748
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 image save --daemon kicbase/echo-server:functional-534748 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-534748
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-534748 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-534748
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-534748
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-534748
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (177.8s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1210 06:55:14.424492  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:24.254631  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:24.261014  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:24.272392  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:24.293757  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:24.335183  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:24.416502  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:24.578154  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:24.899736  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:25.541796  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:26.823341  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:29.386181  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:34.507736  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:55:44.749090  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:56:05.230949  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:56:46.192946  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-717599 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m56.898188589s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (177.80s)

TestMultiControlPlane/serial/DeployApp (6.93s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-717599 kubectl -- rollout status deployment/busybox: (4.022045454s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 kubectl -- exec busybox-7b57f96db7-bms8n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 kubectl -- exec busybox-7b57f96db7-rpb4n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 kubectl -- exec busybox-7b57f96db7-wvgrb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 kubectl -- exec busybox-7b57f96db7-bms8n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 kubectl -- exec busybox-7b57f96db7-rpb4n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 kubectl -- exec busybox-7b57f96db7-wvgrb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 kubectl -- exec busybox-7b57f96db7-bms8n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 kubectl -- exec busybox-7b57f96db7-rpb4n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 kubectl -- exec busybox-7b57f96db7-wvgrb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.93s)
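
The DNS checks above can be replayed by hand; a minimal sketch (manifest path and profile are from this run; the busybox pod name is run-specific):

  out/minikube-linux-arm64 -p ha-717599 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
  out/minikube-linux-arm64 -p ha-717599 kubectl -- rollout status deployment/busybox
  # resolve the three names the test checks, from any busybox pod
  out/minikube-linux-arm64 -p ha-717599 kubectl -- exec busybox-7b57f96db7-bms8n -- nslookup kubernetes.default.svc.cluster.local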

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.58s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 kubectl -- exec busybox-7b57f96db7-bms8n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 kubectl -- exec busybox-7b57f96db7-bms8n -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 kubectl -- exec busybox-7b57f96db7-rpb4n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 kubectl -- exec busybox-7b57f96db7-rpb4n -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 kubectl -- exec busybox-7b57f96db7-wvgrb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 kubectl -- exec busybox-7b57f96db7-wvgrb -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.58s)
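
The pipeline in these steps assumes busybox's nslookup prints the resolved address on its fifth output line: awk 'NR==5' selects that line and cut takes the third space-separated field. A sketch of the same check (pod name from this run):

  HOST_IP=$(out/minikube-linux-arm64 -p ha-717599 kubectl -- exec busybox-7b57f96db7-bms8n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  out/minikube-linux-arm64 -p ha-717599 kubectl -- exec busybox-7b57f96db7-bms8n -- sh -c "ping -c 1 $HOST_IP"   # 192.168.49.1 in this run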

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (57.64s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 node add --alsologtostderr -v 5
E1210 06:57:35.786371  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:58:08.114638  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-717599 node add --alsologtostderr -v 5: (56.611338905s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-717599 status --alsologtostderr -v 5: (1.029831122s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.64s)
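
Adding a worker is a single command against the running profile; status should then list the new node:

  out/minikube-linux-arm64 -p ha-717599 node add --alsologtostderr -v 5
  out/minikube-linux-arm64 -p ha-717599 status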

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.09s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-717599 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.09s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.031797054s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.81s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-717599 status --output json --alsologtostderr -v 5: (1.079288499s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 cp testdata/cp-test.txt ha-717599:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 cp ha-717599:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1345548357/001/cp-test_ha-717599.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 cp ha-717599:/home/docker/cp-test.txt ha-717599-m02:/home/docker/cp-test_ha-717599_ha-717599-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m02 "sudo cat /home/docker/cp-test_ha-717599_ha-717599-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 cp ha-717599:/home/docker/cp-test.txt ha-717599-m03:/home/docker/cp-test_ha-717599_ha-717599-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m03 "sudo cat /home/docker/cp-test_ha-717599_ha-717599-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 cp ha-717599:/home/docker/cp-test.txt ha-717599-m04:/home/docker/cp-test_ha-717599_ha-717599-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m04 "sudo cat /home/docker/cp-test_ha-717599_ha-717599-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 cp testdata/cp-test.txt ha-717599-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 cp ha-717599-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1345548357/001/cp-test_ha-717599-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 cp ha-717599-m02:/home/docker/cp-test.txt ha-717599:/home/docker/cp-test_ha-717599-m02_ha-717599.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599 "sudo cat /home/docker/cp-test_ha-717599-m02_ha-717599.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 cp ha-717599-m02:/home/docker/cp-test.txt ha-717599-m03:/home/docker/cp-test_ha-717599-m02_ha-717599-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m03 "sudo cat /home/docker/cp-test_ha-717599-m02_ha-717599-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 cp ha-717599-m02:/home/docker/cp-test.txt ha-717599-m04:/home/docker/cp-test_ha-717599-m02_ha-717599-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m04 "sudo cat /home/docker/cp-test_ha-717599-m02_ha-717599-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 cp testdata/cp-test.txt ha-717599-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 cp ha-717599-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1345548357/001/cp-test_ha-717599-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 cp ha-717599-m03:/home/docker/cp-test.txt ha-717599:/home/docker/cp-test_ha-717599-m03_ha-717599.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599 "sudo cat /home/docker/cp-test_ha-717599-m03_ha-717599.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 cp ha-717599-m03:/home/docker/cp-test.txt ha-717599-m02:/home/docker/cp-test_ha-717599-m03_ha-717599-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m02 "sudo cat /home/docker/cp-test_ha-717599-m03_ha-717599-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 cp ha-717599-m03:/home/docker/cp-test.txt ha-717599-m04:/home/docker/cp-test_ha-717599-m03_ha-717599-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m04 "sudo cat /home/docker/cp-test_ha-717599-m03_ha-717599-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 cp testdata/cp-test.txt ha-717599-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 cp ha-717599-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1345548357/001/cp-test_ha-717599-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 cp ha-717599-m04:/home/docker/cp-test.txt ha-717599:/home/docker/cp-test_ha-717599-m04_ha-717599.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599 "sudo cat /home/docker/cp-test_ha-717599-m04_ha-717599.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 cp ha-717599-m04:/home/docker/cp-test.txt ha-717599-m02:/home/docker/cp-test_ha-717599-m04_ha-717599-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m02 "sudo cat /home/docker/cp-test_ha-717599-m04_ha-717599-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 cp ha-717599-m04:/home/docker/cp-test.txt ha-717599-m03:/home/docker/cp-test_ha-717599-m04_ha-717599-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m03 "sudo cat /home/docker/cp-test_ha-717599-m04_ha-717599-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.81s)
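
The matrix above exercises all three copy directions supported by minikube cp, each verified over ssh; condensed to one example of each (commands taken from the run):

  out/minikube-linux-arm64 -p ha-717599 cp testdata/cp-test.txt ha-717599:/home/docker/cp-test.txt                  # host -> node
  out/minikube-linux-arm64 -p ha-717599 cp ha-717599:/home/docker/cp-test.txt /tmp/cp-test_ha-717599.txt            # node -> host
  out/minikube-linux-arm64 -p ha-717599 cp ha-717599:/home/docker/cp-test.txt ha-717599-m02:/home/docker/cp-test_ha-717599_ha-717599-m02.txt   # node -> node
  out/minikube-linux-arm64 -p ha-717599 ssh -n ha-717599-m02 "sudo cat /home/docker/cp-test_ha-717599_ha-717599-m02.txt"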

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-717599 node stop m02 --alsologtostderr -v 5: (12.166107038s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-717599 status --alsologtostderr -v 5: exit status 7 (830.454371ms)

-- stdout --
	ha-717599
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-717599-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-717599-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-717599-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I1210 06:59:05.488160  873610 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:59:05.488623  873610 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:59:05.488646  873610 out.go:374] Setting ErrFile to fd 2...
	I1210 06:59:05.488654  873610 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:59:05.489513  873610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 06:59:05.489883  873610 out.go:368] Setting JSON to false
	I1210 06:59:05.489978  873610 mustload.go:66] Loading cluster: ha-717599
	I1210 06:59:05.493754  873610 notify.go:221] Checking for updates...
	I1210 06:59:05.493761  873610 config.go:182] Loaded profile config "ha-717599": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1210 06:59:05.493928  873610 status.go:174] checking status of ha-717599 ...
	I1210 06:59:05.494559  873610 cli_runner.go:164] Run: docker container inspect ha-717599 --format={{.State.Status}}
	I1210 06:59:05.515798  873610 status.go:371] ha-717599 host status = "Running" (err=<nil>)
	I1210 06:59:05.515820  873610 host.go:66] Checking if "ha-717599" exists ...
	I1210 06:59:05.516138  873610 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-717599
	I1210 06:59:05.547992  873610 host.go:66] Checking if "ha-717599" exists ...
	I1210 06:59:05.548305  873610 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:59:05.548360  873610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-717599
	I1210 06:59:05.569511  873610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/ha-717599/id_rsa Username:docker}
	I1210 06:59:05.678715  873610 ssh_runner.go:195] Run: systemctl --version
	I1210 06:59:05.685856  873610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:59:05.699975  873610 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:59:05.782832  873610 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-10 06:59:05.772717052 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:59:05.783370  873610 kubeconfig.go:125] found "ha-717599" server: "https://192.168.49.254:8443"
	I1210 06:59:05.783395  873610 api_server.go:166] Checking apiserver status ...
	I1210 06:59:05.783440  873610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:59:05.797959  873610 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1436/cgroup
	I1210 06:59:05.806422  873610 api_server.go:182] apiserver freezer: "8:freezer:/docker/69b441d395106a405a7d991c280f2a18e0914b39a334eaa187ce579acc77e0a4/kubepods/burstable/pod66468a433bebc9a0a15d05b0ef2855e3/c044c314039e1f58bc2876e3b6a90f1f38ca5d835742f631f89803c717e5b548"
	I1210 06:59:05.806538  873610 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/69b441d395106a405a7d991c280f2a18e0914b39a334eaa187ce579acc77e0a4/kubepods/burstable/pod66468a433bebc9a0a15d05b0ef2855e3/c044c314039e1f58bc2876e3b6a90f1f38ca5d835742f631f89803c717e5b548/freezer.state
	I1210 06:59:05.815102  873610 api_server.go:204] freezer state: "THAWED"
	I1210 06:59:05.815126  873610 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1210 06:59:05.828987  873610 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1210 06:59:05.829030  873610 status.go:463] ha-717599 apiserver status = Running (err=<nil>)
	I1210 06:59:05.829041  873610 status.go:176] ha-717599 status: &{Name:ha-717599 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:59:05.829058  873610 status.go:174] checking status of ha-717599-m02 ...
	I1210 06:59:05.829393  873610 cli_runner.go:164] Run: docker container inspect ha-717599-m02 --format={{.State.Status}}
	I1210 06:59:05.846455  873610 status.go:371] ha-717599-m02 host status = "Stopped" (err=<nil>)
	I1210 06:59:05.846549  873610 status.go:384] host is not running, skipping remaining checks
	I1210 06:59:05.846557  873610 status.go:176] ha-717599-m02 status: &{Name:ha-717599-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:59:05.846586  873610 status.go:174] checking status of ha-717599-m03 ...
	I1210 06:59:05.846965  873610 cli_runner.go:164] Run: docker container inspect ha-717599-m03 --format={{.State.Status}}
	I1210 06:59:05.865992  873610 status.go:371] ha-717599-m03 host status = "Running" (err=<nil>)
	I1210 06:59:05.866015  873610 host.go:66] Checking if "ha-717599-m03" exists ...
	I1210 06:59:05.866520  873610 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-717599-m03
	I1210 06:59:05.886632  873610 host.go:66] Checking if "ha-717599-m03" exists ...
	I1210 06:59:05.887046  873610 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:59:05.887095  873610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-717599-m03
	I1210 06:59:05.908676  873610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/ha-717599-m03/id_rsa Username:docker}
	I1210 06:59:06.020052  873610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:59:06.036403  873610 kubeconfig.go:125] found "ha-717599" server: "https://192.168.49.254:8443"
	I1210 06:59:06.036479  873610 api_server.go:166] Checking apiserver status ...
	I1210 06:59:06.036556  873610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:59:06.050960  873610 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1406/cgroup
	I1210 06:59:06.060259  873610 api_server.go:182] apiserver freezer: "8:freezer:/docker/4fc30b23090457d42fb5f21c9a61029196a148c5019cb1fdab71a54464767afb/kubepods/burstable/pod523e1df13828ccab311f3f672d65f597/b413cac98e94190088d3168483bda73b66145367267e32f047097e6900bf68ce"
	I1210 06:59:06.060329  873610 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4fc30b23090457d42fb5f21c9a61029196a148c5019cb1fdab71a54464767afb/kubepods/burstable/pod523e1df13828ccab311f3f672d65f597/b413cac98e94190088d3168483bda73b66145367267e32f047097e6900bf68ce/freezer.state
	I1210 06:59:06.068250  873610 api_server.go:204] freezer state: "THAWED"
	I1210 06:59:06.068279  873610 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1210 06:59:06.076581  873610 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1210 06:59:06.076612  873610 status.go:463] ha-717599-m03 apiserver status = Running (err=<nil>)
	I1210 06:59:06.076623  873610 status.go:176] ha-717599-m03 status: &{Name:ha-717599-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:59:06.076641  873610 status.go:174] checking status of ha-717599-m04 ...
	I1210 06:59:06.077010  873610 cli_runner.go:164] Run: docker container inspect ha-717599-m04 --format={{.State.Status}}
	I1210 06:59:06.097177  873610 status.go:371] ha-717599-m04 host status = "Running" (err=<nil>)
	I1210 06:59:06.097205  873610 host.go:66] Checking if "ha-717599-m04" exists ...
	I1210 06:59:06.097574  873610 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-717599-m04
	I1210 06:59:06.115161  873610 host.go:66] Checking if "ha-717599-m04" exists ...
	I1210 06:59:06.115480  873610 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:59:06.115528  873610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-717599-m04
	I1210 06:59:06.134566  873610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33550 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/ha-717599-m04/id_rsa Username:docker}
	I1210 06:59:06.235809  873610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:59:06.251468  873610 status.go:176] ha-717599-m04 status: &{Name:ha-717599-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.00s)
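
Note the asserted behavior: with one control-plane node stopped, status still reports the surviving nodes Running but exits non-zero (7 in this run), which is the "Non-zero exit" above. A sketch:

  out/minikube-linux-arm64 -p ha-717599 node stop m02
  out/minikube-linux-arm64 -p ha-717599 status; echo "status exit: $?"   # 7 while m02 is down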

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (13.69s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-717599 node start m02 --alsologtostderr -v 5: (12.12875207s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-717599 status --alsologtostderr -v 5: (1.427886799s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (13.69s)
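
Restarting the stopped control plane and confirming the API view recovers:

  out/minikube-linux-arm64 -p ha-717599 node start m02
  kubectl get nodes   # all nodes should report Ready again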

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.48s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.480872227s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.48s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (110.91s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-717599 stop --alsologtostderr -v 5: (37.595232659s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 start --wait true --alsologtostderr -v 5
E1210 07:00:14.423903  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:00:24.251229  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:00:38.853293  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:00:51.956103  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-717599 start --wait true --alsologtostderr -v 5: (1m13.177930111s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (110.91s)
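
The assertion here is that node list returns the same topology before and after a full stop/start cycle; a sketch (the temp paths are illustrative):

  out/minikube-linux-arm64 -p ha-717599 node list > /tmp/nodes.before
  out/minikube-linux-arm64 -p ha-717599 stop
  out/minikube-linux-arm64 -p ha-717599 start --wait true
  out/minikube-linux-arm64 -p ha-717599 node list > /tmp/nodes.after
  diff /tmp/nodes.before /tmp/nodes.after   # empty diff = topology preserved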

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.09s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-717599 node delete m03 --alsologtostderr -v 5: (10.125283072s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.09s)
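
The go-template in the final step prints one line per node's Ready condition; as a shell command (quoting adjusted from the logged form, which passes args without a shell):

  out/minikube-linux-arm64 -p ha-717599 node delete m03
  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'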

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.33s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-717599 stop --alsologtostderr -v 5: (36.207750139s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-717599 status --alsologtostderr -v 5: exit status 7 (125.807626ms)

-- stdout --
	ha-717599
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-717599-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-717599-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I1210 07:02:01.276059  888477 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:02:01.276288  888477 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:02:01.276319  888477 out.go:374] Setting ErrFile to fd 2...
	I1210 07:02:01.276338  888477 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:02:01.277206  888477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 07:02:01.277499  888477 out.go:368] Setting JSON to false
	I1210 07:02:01.277566  888477 mustload.go:66] Loading cluster: ha-717599
	I1210 07:02:01.277650  888477 notify.go:221] Checking for updates...
	I1210 07:02:01.278753  888477 config.go:182] Loaded profile config "ha-717599": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1210 07:02:01.278818  888477 status.go:174] checking status of ha-717599 ...
	I1210 07:02:01.279472  888477 cli_runner.go:164] Run: docker container inspect ha-717599 --format={{.State.Status}}
	I1210 07:02:01.300940  888477 status.go:371] ha-717599 host status = "Stopped" (err=<nil>)
	I1210 07:02:01.300963  888477 status.go:384] host is not running, skipping remaining checks
	I1210 07:02:01.300969  888477 status.go:176] ha-717599 status: &{Name:ha-717599 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 07:02:01.301000  888477 status.go:174] checking status of ha-717599-m02 ...
	I1210 07:02:01.301314  888477 cli_runner.go:164] Run: docker container inspect ha-717599-m02 --format={{.State.Status}}
	I1210 07:02:01.330685  888477 status.go:371] ha-717599-m02 host status = "Stopped" (err=<nil>)
	I1210 07:02:01.330710  888477 status.go:384] host is not running, skipping remaining checks
	I1210 07:02:01.330718  888477 status.go:176] ha-717599-m02 status: &{Name:ha-717599-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 07:02:01.330737  888477 status.go:174] checking status of ha-717599-m04 ...
	I1210 07:02:01.331079  888477 cli_runner.go:164] Run: docker container inspect ha-717599-m04 --format={{.State.Status}}
	I1210 07:02:01.348273  888477 status.go:371] ha-717599-m04 host status = "Stopped" (err=<nil>)
	I1210 07:02:01.348300  888477 status.go:384] host is not running, skipping remaining checks
	I1210 07:02:01.348307  888477 status.go:176] ha-717599-m04 status: &{Name:ha-717599-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.33s)
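
Stopping the profile stops every node; afterwards status exits 7 with all hosts reported Stopped, as in the output above:

  out/minikube-linux-arm64 -p ha-717599 stop
  out/minikube-linux-arm64 -p ha-717599 status; echo "status exit: $?"   # 7 in this run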

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (60.09s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1210 07:02:35.782605  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-717599 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (59.126668798s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (60.09s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (53.41s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-717599 node add --control-plane --alsologtostderr -v 5: (52.351296777s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-717599 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-717599 status --alsologtostderr -v 5: (1.063202649s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (53.41s)
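
A control-plane node is added with the --control-plane flag; status should show the new member as "type: Control Plane":

  out/minikube-linux-arm64 -p ha-717599 node add --control-plane
  out/minikube-linux-arm64 -p ha-717599 status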

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.079386498s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

                                                
                                    
TestJSONOutput/start/Command (52s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-920018 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-920018 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (51.992067819s)
--- PASS: TestJSONOutput/start/Command (52.00s)
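
With --output=json, minikube emits one CloudEvents-style JSON object per line (see the TestErrorJSONOutput stdout below for the shape); the DistinctCurrentSteps/IncreasingCurrentSteps subtests validate the data.currentstep sequence. A sketch of inspecting the stream, assuming jq is available:

  out/minikube-linux-arm64 start -p json-output-920018 --output=json --user=testUser --memory=3072 --wait=true --driver=docker --container-runtime=containerd \
    | jq -r 'select(.type=="io.k8s.sigs.minikube.step") | .data.currentstep + ": " + .data.name'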

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-920018 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-920018 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.94s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-920018 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-920018 --output=json --user=testUser: (5.94024054s)
--- PASS: TestJSONOutput/stop/Command (5.94s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-920986 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-920986 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (95.097961ms)

-- stdout --
	{"specversion":"1.0","id":"9bb08bf7-bd8c-4734-ad88-cbd963eb7ed8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-920986] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ca9dc357-26f5-4a07-89fc-11eebe9c1964","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22089"}}
	{"specversion":"1.0","id":"3f0abec1-511f-4f19-bd5e-cf77be71a18c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ff44f383-26fe-428a-a404-c13e33b1bf56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig"}}
	{"specversion":"1.0","id":"ddd5cec1-1670-4fd4-b65f-af143b1d82a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube"}}
	{"specversion":"1.0","id":"e133f65f-52be-4811-801e-91dae15a4145","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a4f04d6e-15c1-4fae-b892-c50902f0ee0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6216ca15-fef6-49d4-a1b7-fbf8bd944569","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-920986" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-920986
--- PASS: TestErrorJSONOutput (0.24s)
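
The failure path emits a single io.k8s.sigs.minikube.error event whose data block carries name, message, and exitcode; extracting it from the stream above, assuming jq is available:

  out/minikube-linux-arm64 start -p json-output-error-920986 --memory=3072 --output=json --wait=true --driver=fail \
    | jq -r 'select(.type=="io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message + " (exit " + .data.exitcode + ")"'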

                                                
                                    
TestKicCustomNetwork/create_custom_network (40.62s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-877820 --network=
E1210 07:05:14.424412  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:05:24.254614  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-877820 --network=: (38.343018501s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-877820" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-877820
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-877820: (2.252604849s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.62s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (35.21s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-333980 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-333980 --network=bridge: (33.07427816s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-333980" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-333980
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-333980: (2.107606522s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.21s)

                                                
                                    
TestKicExistingNetwork (37.77s)

=== RUN   TestKicExistingNetwork
I1210 07:06:24.849152  786751 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1210 07:06:24.867143  786751 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1210 07:06:24.867233  786751 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1210 07:06:24.867260  786751 cli_runner.go:164] Run: docker network inspect existing-network
W1210 07:06:24.885210  786751 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1210 07:06:24.885248  786751 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1210 07:06:24.885263  786751 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1210 07:06:24.885372  786751 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1210 07:06:24.903638  786751 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7092cc4ae12c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:9b:65:77:38:2f} reservation:<nil>}
I1210 07:06:24.903942  786751 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018e3900}
I1210 07:06:24.903968  786751 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1210 07:06:24.904036  786751 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1210 07:06:24.962308  786751 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-335632 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-335632 --network=existing-network: (35.483621095s)
helpers_test.go:176: Cleaning up "existing-network-335632" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-335632
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-335632: (2.136650963s)
I1210 07:07:02.599699  786751 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (37.77s)
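
For reference, the pre-created network that this test adopts can be reproduced outside the suite. Below is a minimal Go sketch (not part of the test code) that shells out to Docker with the same flags the log shows at 07:06:24; the subnet, gateway, MTU, and labels are copied from the log above, and a local Docker daemon is assumed.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the `docker network create` invocation logged by network_create.go:124.
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("network create failed:", err)
	}
}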

TestKicCustomSubnet (35.57s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-213396 --subnet=192.168.60.0/24
E1210 07:07:35.782308  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-213396 --subnet=192.168.60.0/24: (33.319828358s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-213396 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-213396" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-213396
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-213396: (2.22820117s)
--- PASS: TestKicCustomSubnet (35.57s)

TestKicStaticIP (37.7s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-951594 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-951594 --static-ip=192.168.200.200: (35.328284455s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-951594 ip
helpers_test.go:176: Cleaning up "static-ip-951594" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-951594
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-951594: (2.213461068s)
--- PASS: TestKicStaticIP (37.70s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (71.26s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-641378 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-641378 --driver=docker  --container-runtime=containerd: (32.277424155s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-644242 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-644242 --driver=docker  --container-runtime=containerd: (33.033187761s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-641378
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-644242
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-644242" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-644242
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-644242: (2.1510649s)
helpers_test.go:176: Cleaning up "first-641378" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-641378
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-641378: (2.398847161s)
--- PASS: TestMinikubeProfile (71.26s)
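
The `profile list -ojson` calls above are what the test asserts against. A minimal sketch of consuming that output, assuming only that it is valid JSON (the exact schema is not shown in this log, so the sketch decodes into a generic map rather than a typed struct):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Run the same command the test does and decode whatever JSON comes back.
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-ojson").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var parsed map[string]interface{}
	if err := json.Unmarshal(out, &parsed); err != nil {
		fmt.Println("output was not valid JSON:", err)
		return
	}
	for key := range parsed {
		fmt.Println("top-level key:", key)
	}
}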

TestMountStart/serial/StartWithMountFirst (8.36s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-436586 --memory=3072 --mount-string /tmp/TestMountStartserial2197043582/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-436586 --memory=3072 --mount-string /tmp/TestMountStartserial2197043582/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.35823565s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.36s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-436586 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (8.34s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-438679 --memory=3072 --mount-string /tmp/TestMountStartserial2197043582/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-438679 --memory=3072 --mount-string /tmp/TestMountStartserial2197043582/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.344365156s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.34s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-438679 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-436586 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-436586 --alsologtostderr -v=5: (1.727234005s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-438679 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-438679
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-438679: (1.29425052s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (7.42s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-438679
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-438679: (6.419079071s)
--- PASS: TestMountStart/serial/RestartStopped (7.42s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-438679 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (76.31s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-912833 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1210 07:09:57.491465  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:10:14.424763  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:10:24.250597  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-912833 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m15.769513864s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (76.31s)

TestMultiNode/serial/DeployApp2Nodes (5.55s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-912833 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-912833 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-912833 -- rollout status deployment/busybox: (3.632548113s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-912833 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-912833 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-912833 -- exec busybox-7b57f96db7-h8kzw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-912833 -- exec busybox-7b57f96db7-zqzhd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-912833 -- exec busybox-7b57f96db7-h8kzw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-912833 -- exec busybox-7b57f96db7-zqzhd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-912833 -- exec busybox-7b57f96db7-h8kzw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-912833 -- exec busybox-7b57f96db7-zqzhd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.55s)
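
The pattern above — list the pod names with a jsonpath query, then exec nslookup inside each pod — can be sketched as follows. This is an illustrative wrapper, not the test's own helper; the context name is taken from the log, and kubectl is assumed to be on PATH.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List the busybox pod names, as multinode_test.go:528 does via jsonpath.
	out, err := exec.Command("kubectl", "--context", "multinode-912833",
		"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		fmt.Println("listing pods failed:", err)
		return
	}
	// Exec a DNS lookup in each pod, as the test does for each replica.
	for _, pod := range strings.Fields(string(out)) {
		res, err := exec.Command("kubectl", "--context", "multinode-912833",
			"exec", pod, "--", "nslookup", "kubernetes.io").CombinedOutput()
		fmt.Printf("%s:\n%s(err=%v)\n", pod, res, err)
	}
}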

TestMultiNode/serial/PingHostFrom2Pods (1.03s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-912833 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-912833 -- exec busybox-7b57f96db7-h8kzw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-912833 -- exec busybox-7b57f96db7-h8kzw -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-912833 -- exec busybox-7b57f96db7-zqzhd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-912833 -- exec busybox-7b57f96db7-zqzhd -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)
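
The shell pipeline the test runs in each pod, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, takes the fifth line of nslookup's output and that line's third space-separated field, which in busybox's output format is the resolved address; that address is then used as the ping target (192.168.67.1, the network gateway). A minimal Go sketch of the same extraction, using sample output that is illustrative rather than captured from this run:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Illustrative busybox-style nslookup output; not captured from this run.
	sample := "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10:53\n\nName:\thost.minikube.internal\nAddress: 1 192.168.67.1\n"
	lines := strings.Split(sample, "\n")
	if len(lines) < 5 {
		fmt.Println("unexpected nslookup output")
		return
	}
	fields := strings.Split(lines[4], " ") // awk 'NR==5' selects line five
	if len(fields) >= 3 {
		fmt.Println(fields[2]) // cut -d' ' -f3 takes the third field: the IP
	}
}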

TestMultiNode/serial/AddNode (59.35s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-912833 -v=5 --alsologtostderr
E1210 07:11:47.318631  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-912833 -v=5 --alsologtostderr: (58.656572788s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (59.35s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-912833 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.72s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

TestMultiNode/serial/CopyFile (10.42s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 cp testdata/cp-test.txt multinode-912833:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 ssh -n multinode-912833 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 cp multinode-912833:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile971104374/001/cp-test_multinode-912833.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 ssh -n multinode-912833 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 cp multinode-912833:/home/docker/cp-test.txt multinode-912833-m02:/home/docker/cp-test_multinode-912833_multinode-912833-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 ssh -n multinode-912833 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 ssh -n multinode-912833-m02 "sudo cat /home/docker/cp-test_multinode-912833_multinode-912833-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 cp multinode-912833:/home/docker/cp-test.txt multinode-912833-m03:/home/docker/cp-test_multinode-912833_multinode-912833-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 ssh -n multinode-912833 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 ssh -n multinode-912833-m03 "sudo cat /home/docker/cp-test_multinode-912833_multinode-912833-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 cp testdata/cp-test.txt multinode-912833-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 ssh -n multinode-912833-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 cp multinode-912833-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile971104374/001/cp-test_multinode-912833-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 ssh -n multinode-912833-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 cp multinode-912833-m02:/home/docker/cp-test.txt multinode-912833:/home/docker/cp-test_multinode-912833-m02_multinode-912833.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 ssh -n multinode-912833-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 ssh -n multinode-912833 "sudo cat /home/docker/cp-test_multinode-912833-m02_multinode-912833.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 cp multinode-912833-m02:/home/docker/cp-test.txt multinode-912833-m03:/home/docker/cp-test_multinode-912833-m02_multinode-912833-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 ssh -n multinode-912833-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 ssh -n multinode-912833-m03 "sudo cat /home/docker/cp-test_multinode-912833-m02_multinode-912833-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 cp testdata/cp-test.txt multinode-912833-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 ssh -n multinode-912833-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 cp multinode-912833-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile971104374/001/cp-test_multinode-912833-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 ssh -n multinode-912833-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 cp multinode-912833-m03:/home/docker/cp-test.txt multinode-912833:/home/docker/cp-test_multinode-912833-m03_multinode-912833.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 ssh -n multinode-912833-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 ssh -n multinode-912833 "sudo cat /home/docker/cp-test_multinode-912833-m03_multinode-912833.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 cp multinode-912833-m03:/home/docker/cp-test.txt multinode-912833-m02:/home/docker/cp-test_multinode-912833-m03_multinode-912833-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 ssh -n multinode-912833-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 ssh -n multinode-912833-m02 "sudo cat /home/docker/cp-test_multinode-912833-m03_multinode-912833-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.42s)

TestMultiNode/serial/StopNode (2.42s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-912833 node stop m03: (1.325367371s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-912833 status: exit status 7 (539.214705ms)

-- stdout --
	multinode-912833
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-912833-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-912833-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-912833 status --alsologtostderr: exit status 7 (554.227781ms)

-- stdout --
	multinode-912833
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-912833-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-912833-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1210 07:12:32.712540  941525 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:12:32.712784  941525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:12:32.712813  941525 out.go:374] Setting ErrFile to fd 2...
	I1210 07:12:32.712834  941525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:12:32.713308  941525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 07:12:32.713590  941525 out.go:368] Setting JSON to false
	I1210 07:12:32.713659  941525 mustload.go:66] Loading cluster: multinode-912833
	I1210 07:12:32.713814  941525 notify.go:221] Checking for updates...
	I1210 07:12:32.714215  941525 config.go:182] Loaded profile config "multinode-912833": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1210 07:12:32.714257  941525 status.go:174] checking status of multinode-912833 ...
	I1210 07:12:32.714889  941525 cli_runner.go:164] Run: docker container inspect multinode-912833 --format={{.State.Status}}
	I1210 07:12:32.733535  941525 status.go:371] multinode-912833 host status = "Running" (err=<nil>)
	I1210 07:12:32.733559  941525 host.go:66] Checking if "multinode-912833" exists ...
	I1210 07:12:32.733865  941525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-912833
	I1210 07:12:32.765867  941525 host.go:66] Checking if "multinode-912833" exists ...
	I1210 07:12:32.766183  941525 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:12:32.766229  941525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-912833
	I1210 07:12:32.786861  941525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33655 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/multinode-912833/id_rsa Username:docker}
	I1210 07:12:32.887346  941525 ssh_runner.go:195] Run: systemctl --version
	I1210 07:12:32.894203  941525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:12:32.907615  941525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:12:32.966191  941525 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-10 07:12:32.956195513 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:12:32.966798  941525 kubeconfig.go:125] found "multinode-912833" server: "https://192.168.67.2:8443"
	I1210 07:12:32.966838  941525 api_server.go:166] Checking apiserver status ...
	I1210 07:12:32.966886  941525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:32.978803  941525 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1375/cgroup
	I1210 07:12:32.987371  941525 api_server.go:182] apiserver freezer: "8:freezer:/docker/c2dff9604d318f2a3f198d25ce5485d5d9ff6a402e58bd1680506fb5185e358c/kubepods/burstable/podf19d2b936bb6341d6a0c470244c92a97/f2627694b08fae47dd7a5b991b6c0c7998aba870078bfb3688660b300c0c7a0c"
	I1210 07:12:32.987502  941525 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c2dff9604d318f2a3f198d25ce5485d5d9ff6a402e58bd1680506fb5185e358c/kubepods/burstable/podf19d2b936bb6341d6a0c470244c92a97/f2627694b08fae47dd7a5b991b6c0c7998aba870078bfb3688660b300c0c7a0c/freezer.state
	I1210 07:12:32.997357  941525 api_server.go:204] freezer state: "THAWED"
	I1210 07:12:32.997406  941525 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1210 07:12:33.007155  941525 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1210 07:12:33.007191  941525 status.go:463] multinode-912833 apiserver status = Running (err=<nil>)
	I1210 07:12:33.007203  941525 status.go:176] multinode-912833 status: &{Name:multinode-912833 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 07:12:33.007230  941525 status.go:174] checking status of multinode-912833-m02 ...
	I1210 07:12:33.007616  941525 cli_runner.go:164] Run: docker container inspect multinode-912833-m02 --format={{.State.Status}}
	I1210 07:12:33.026546  941525 status.go:371] multinode-912833-m02 host status = "Running" (err=<nil>)
	I1210 07:12:33.026574  941525 host.go:66] Checking if "multinode-912833-m02" exists ...
	I1210 07:12:33.026905  941525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-912833-m02
	I1210 07:12:33.048886  941525 host.go:66] Checking if "multinode-912833-m02" exists ...
	I1210 07:12:33.049210  941525 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:12:33.049251  941525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-912833-m02
	I1210 07:12:33.075325  941525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33660 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/multinode-912833-m02/id_rsa Username:docker}
	I1210 07:12:33.171888  941525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:12:33.185253  941525 status.go:176] multinode-912833-m02 status: &{Name:multinode-912833-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1210 07:12:33.185302  941525 status.go:174] checking status of multinode-912833-m03 ...
	I1210 07:12:33.185641  941525 cli_runner.go:164] Run: docker container inspect multinode-912833-m03 --format={{.State.Status}}
	I1210 07:12:33.203235  941525 status.go:371] multinode-912833-m03 host status = "Stopped" (err=<nil>)
	I1210 07:12:33.203259  941525 status.go:384] host is not running, skipping remaining checks
	I1210 07:12:33.203267  941525 status.go:176] multinode-912833-m03 status: &{Name:multinode-912833-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)
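
The status check in the stderr trace above ends with a health probe: api_server.go resolves the apiserver container's freezer cgroup, confirms it is THAWED, then GETs https://192.168.67.2:8443/healthz and expects a 200. A minimal sketch of that last step, assuming the address from the log and skipping certificate verification (the real client uses minikube's CA, which this sketch does not have):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Probe the apiserver healthz endpoint, as api_server.go:253 logs.
	client := &http.Client{Transport: &http.Transport{
		// Assumption: skip TLS verification instead of loading minikube's CA.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}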

TestMultiNode/serial/StartAfterStop (7.76s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 node start m03 -v=5 --alsologtostderr
E1210 07:12:35.782688  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-912833 node start m03 -v=5 --alsologtostderr: (6.983776611s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.76s)

TestMultiNode/serial/RestartKeepsNodes (79.5s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-912833
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-912833
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-912833: (25.12298503s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-912833 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-912833 --wait=true -v=5 --alsologtostderr: (54.225713526s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-912833
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.50s)

TestMultiNode/serial/DeleteNode (5.66s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-912833 node delete m03: (4.98007199s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.66s)

TestMultiNode/serial/StopMultiNode (24.19s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-912833 stop: (23.978640889s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-912833 status: exit status 7 (110.333609ms)

-- stdout --
	multinode-912833
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-912833-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-912833 status --alsologtostderr: exit status 7 (100.471181ms)

-- stdout --
	multinode-912833
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-912833-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1210 07:14:30.264287  950300 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:14:30.264426  950300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:14:30.264438  950300 out.go:374] Setting ErrFile to fd 2...
	I1210 07:14:30.264444  950300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:14:30.264707  950300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 07:14:30.264911  950300 out.go:368] Setting JSON to false
	I1210 07:14:30.264958  950300 mustload.go:66] Loading cluster: multinode-912833
	I1210 07:14:30.265030  950300 notify.go:221] Checking for updates...
	I1210 07:14:30.266062  950300 config.go:182] Loaded profile config "multinode-912833": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1210 07:14:30.266093  950300 status.go:174] checking status of multinode-912833 ...
	I1210 07:14:30.266775  950300 cli_runner.go:164] Run: docker container inspect multinode-912833 --format={{.State.Status}}
	I1210 07:14:30.284908  950300 status.go:371] multinode-912833 host status = "Stopped" (err=<nil>)
	I1210 07:14:30.284928  950300 status.go:384] host is not running, skipping remaining checks
	I1210 07:14:30.284935  950300 status.go:176] multinode-912833 status: &{Name:multinode-912833 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 07:14:30.284965  950300 status.go:174] checking status of multinode-912833-m02 ...
	I1210 07:14:30.285289  950300 cli_runner.go:164] Run: docker container inspect multinode-912833-m02 --format={{.State.Status}}
	I1210 07:14:30.311895  950300 status.go:371] multinode-912833-m02 host status = "Stopped" (err=<nil>)
	I1210 07:14:30.311961  950300 status.go:384] host is not running, skipping remaining checks
	I1210 07:14:30.311996  950300 status.go:176] multinode-912833-m02 status: &{Name:multinode-912833-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.19s)
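
Note that `minikube status` deliberately exits non-zero (exit status 7 in both runs above) when a host is stopped, so callers must inspect the exit code rather than treating any error as a failure. A minimal sketch of that handling, using the profile name from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-912833", "status")
	out, err := cmd.Output()
	fmt.Printf("%s", out)
	// A stopped cluster is reported through the exit code, not stderr alone.
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("status exit code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run status:", err)
	}
}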

TestMultiNode/serial/RestartMultiNode (57.88s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-912833 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1210 07:15:14.424546  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:15:24.250808  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-912833 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (57.161566468s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-912833 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.88s)

TestMultiNode/serial/ValidateNameConflict (34.67s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-912833
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-912833-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-912833-m02 --driver=docker  --container-runtime=containerd: exit status 14 (96.94296ms)

-- stdout --
	* [multinode-912833-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-912833-m02' is duplicated with machine name 'multinode-912833-m02' in profile 'multinode-912833'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-912833-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-912833-m03 --driver=docker  --container-runtime=containerd: (32.09485409s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-912833
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-912833: exit status 80 (357.854145ms)

-- stdout --
	* Adding node m03 to cluster multinode-912833 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-912833-m03 already exists in multinode-912833-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-912833-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-912833-m03: (2.059989679s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.67s)

TestPreload (116.52s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-200938 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-200938 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (57.872121091s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-200938 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-200938 image pull gcr.io/k8s-minikube/busybox: (2.12518299s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-200938
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-200938: (5.945042986s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-200938 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1210 07:17:18.854964  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:17:35.782620  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-200938 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (47.924618171s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-200938 image list
helpers_test.go:176: Cleaning up "test-preload-200938" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-200938
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-200938: (2.391271022s)
--- PASS: TestPreload (116.52s)

TestScheduledStopUnix (108.2s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-107414 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-107414 --memory=3072 --driver=docker  --container-runtime=containerd: (31.884392846s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-107414 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1210 07:18:35.586970  966169 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:18:35.587101  966169 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:18:35.587113  966169 out.go:374] Setting ErrFile to fd 2...
	I1210 07:18:35.587120  966169 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:18:35.587480  966169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 07:18:35.587777  966169 out.go:368] Setting JSON to false
	I1210 07:18:35.587912  966169 mustload.go:66] Loading cluster: scheduled-stop-107414
	I1210 07:18:35.588821  966169 config.go:182] Loaded profile config "scheduled-stop-107414": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1210 07:18:35.589105  966169 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/scheduled-stop-107414/config.json ...
	I1210 07:18:35.589350  966169 mustload.go:66] Loading cluster: scheduled-stop-107414
	I1210 07:18:35.589521  966169 config.go:182] Loaded profile config "scheduled-stop-107414": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-107414 -n scheduled-stop-107414
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-107414 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1210 07:18:36.040472  966259 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:18:36.040580  966259 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:18:36.040652  966259 out.go:374] Setting ErrFile to fd 2...
	I1210 07:18:36.040662  966259 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:18:36.041150  966259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 07:18:36.041420  966259 out.go:368] Setting JSON to false
	I1210 07:18:36.041624  966259 daemonize_unix.go:73] killing process 966188 as it is an old scheduled stop
	I1210 07:18:36.041729  966259 mustload.go:66] Loading cluster: scheduled-stop-107414
	I1210 07:18:36.042120  966259 config.go:182] Loaded profile config "scheduled-stop-107414": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1210 07:18:36.042195  966259 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/scheduled-stop-107414/config.json ...
	I1210 07:18:36.042371  966259 mustload.go:66] Loading cluster: scheduled-stop-107414
	I1210 07:18:36.042512  966259 config.go:182] Loaded profile config "scheduled-stop-107414": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1210 07:18:36.052099  786751 retry.go:31] will retry after 114.737µs: open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/scheduled-stop-107414/pid: no such file or directory
I1210 07:18:36.052481  786751 retry.go:31] will retry after 218.212µs: open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/scheduled-stop-107414/pid: no such file or directory
I1210 07:18:36.053647  786751 retry.go:31] will retry after 296.068µs: open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/scheduled-stop-107414/pid: no such file or directory
I1210 07:18:36.054806  786751 retry.go:31] will retry after 495.24µs: open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/scheduled-stop-107414/pid: no such file or directory
I1210 07:18:36.055964  786751 retry.go:31] will retry after 546.558µs: open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/scheduled-stop-107414/pid: no such file or directory
I1210 07:18:36.057114  786751 retry.go:31] will retry after 1.071457ms: open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/scheduled-stop-107414/pid: no such file or directory
I1210 07:18:36.058251  786751 retry.go:31] will retry after 779.557µs: open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/scheduled-stop-107414/pid: no such file or directory
I1210 07:18:36.059337  786751 retry.go:31] will retry after 1.498028ms: open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/scheduled-stop-107414/pid: no such file or directory
I1210 07:18:36.061633  786751 retry.go:31] will retry after 2.5543ms: open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/scheduled-stop-107414/pid: no such file or directory
I1210 07:18:36.064904  786751 retry.go:31] will retry after 2.45062ms: open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/scheduled-stop-107414/pid: no such file or directory
I1210 07:18:36.068152  786751 retry.go:31] will retry after 8.499823ms: open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/scheduled-stop-107414/pid: no such file or directory
I1210 07:18:36.077376  786751 retry.go:31] will retry after 6.109661ms: open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/scheduled-stop-107414/pid: no such file or directory
I1210 07:18:36.084614  786751 retry.go:31] will retry after 13.629724ms: open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/scheduled-stop-107414/pid: no such file or directory
I1210 07:18:36.098963  786751 retry.go:31] will retry after 18.430764ms: open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/scheduled-stop-107414/pid: no such file or directory
I1210 07:18:36.118139  786751 retry.go:31] will retry after 31.676377ms: open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/scheduled-stop-107414/pid: no such file or directory
I1210 07:18:36.150363  786751 retry.go:31] will retry after 23.116981ms: open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/scheduled-stop-107414/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-107414 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-107414 -n scheduled-stop-107414
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-107414
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-107414 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1210 07:19:01.946950  966927 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:19:01.947087  966927 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:19:01.947111  966927 out.go:374] Setting ErrFile to fd 2...
	I1210 07:19:01.947124  966927 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:19:01.947415  966927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 07:19:01.947785  966927 out.go:368] Setting JSON to false
	I1210 07:19:01.947975  966927 mustload.go:66] Loading cluster: scheduled-stop-107414
	I1210 07:19:01.948480  966927 config.go:182] Loaded profile config "scheduled-stop-107414": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1210 07:19:01.950312  966927 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/scheduled-stop-107414/config.json ...
	I1210 07:19:01.950614  966927 mustload.go:66] Loading cluster: scheduled-stop-107414
	I1210 07:19:01.950805  966927 config.go:182] Loaded profile config "scheduled-stop-107414": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-107414
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-107414: exit status 7 (71.070753ms)

-- stdout --
	scheduled-stop-107414
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-107414 -n scheduled-stop-107414
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-107414 -n scheduled-stop-107414: exit status 7 (71.00146ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-107414" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-107414
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-107414: (4.740493764s)
--- PASS: TestScheduledStopUnix (108.20s)
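Context for the "signal error was: os: process already finished" line above: stop --schedule forks a background stopper and records its pid under the profile directory (the .../scheduled-stop-107414/pid path being polled at the top of this test), and --cancel-scheduled signals that pid. A hedged sketch of the cancel step; the helper name and error handling are assumptions, not minikube's code:

package main

import (
	"errors"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// cancelScheduledStop kills the scheduled-stop daemon recorded in pidPath.
func cancelScheduledStop(pidPath string) error {
	data, err := os.ReadFile(pidPath)
	if err != nil {
		return err
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return err
	}
	proc, err := os.FindProcess(pid) // always succeeds on Unix
	if err != nil {
		return err
	}
	// If the stopper already ran or was cancelled, Kill reports
	// os.ErrProcessDone ("os: process already finished"), which the
	// test above treats as benign.
	if err := proc.Kill(); err != nil && !errors.Is(err, os.ErrProcessDone) {
		return err
	}
	return os.Remove(pidPath)
}

func main() {
	if err := cancelScheduledStop("/tmp/scheduled-stop-107414/pid"); err != nil {
		fmt.Println("cancel:", err)
	}
}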

TestInsufficientStorage (12.65s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-751935 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-751935 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.097052268s)

-- stdout --
	{"specversion":"1.0","id":"e40637b3-101a-4e2e-ab7a-003dd6faf75b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-751935] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"673eba20-b75d-44d2-a55b-e8f6a1c38dd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22089"}}
	{"specversion":"1.0","id":"40da3c65-e1e5-47f9-ab7c-384d556ecd10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3a9f7a4a-4172-4e0d-9b3e-4e7c1a0037c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig"}}
	{"specversion":"1.0","id":"3e576ab0-a3c5-4125-b43f-ced132687d7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube"}}
	{"specversion":"1.0","id":"5d7e1da9-a72a-48b1-84e3-55f3deef6d37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"2593d66f-9ea6-43e5-90e6-68b534329a94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9ae67165-0cbd-4730-8ce4-a820900b3e23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0c42ac96-9a7d-44bf-b276-34806a0eeafd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ec9c27e7-b8d3-47b8-8c73-bed4e8dcfe01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1c265621-e8af-4300-a0f6-45af2fc1077a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"931ee360-d364-40fc-a6ef-c7e652502498","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-751935\" primary control-plane node in \"insufficient-storage-751935\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8ad07db3-02ee-4a00-b5f9-86058c8d1bbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765319469-22089 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"4912d9b1-b4ca-438d-b751-3cb775ed8949","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d52e12d5-a64b-4c2d-976d-0998ca980f09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
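With --output=json, every progress line in the stdout block above is a CloudEvents envelope: "type" classifies the event and "data" carries string fields such as "message" and, for io.k8s.sigs.minikube.error, the "exitcode" (26, RSRC_DOCKER_STORAGE) that the process then exits with. A sketch that extracts just those fields; the struct is an illustrative assumption, not minikube's own type:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent models only the envelope fields used below; all "data" values
// in the log are strings, so a string map is sufficient here.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error event %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}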
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-751935 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-751935 --output=json --layout=cluster: exit status 7 (299.928994ms)

-- stdout --
	{"Name":"insufficient-storage-751935","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-751935","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1210 07:20:02.224549  968752 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-751935" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-751935 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-751935 --output=json --layout=cluster: exit status 7 (286.774695ms)

-- stdout --
	{"Name":"insufficient-storage-751935","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-751935","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1210 07:20:02.513044  968818 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-751935" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig
	E1210 07:20:02.522815  968818 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/insufficient-storage-751935/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-751935" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-751935
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-751935: (1.966465803s)
--- PASS: TestInsufficientStorage (12.65s)

TestRunningBinaryUpgrade (314.5s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3803919505 start -p running-upgrade-571323 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1210 07:27:35.782555  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3803919505 start -p running-upgrade-571323 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (35.857412939s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-571323 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1210 07:28:27.320401  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:30:14.423897  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:30:24.251179  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-571323 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m34.976935274s)
helpers_test.go:176: Cleaning up "running-upgrade-571323" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-571323
E1210 07:32:35.783240  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-571323: (1.996103623s)
--- PASS: TestRunningBinaryUpgrade (314.50s)
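The passing flow above is the whole point of the binary-upgrade tests: provision a profile with a previously released binary fetched to /tmp, then run start on the same profile with the binary under test and require it to reconcile the existing cluster. A condensed sketch of that sequence with os/exec (paths and profile name taken from the log; the helper is illustrative, not the test's actual code):

package main

import (
	"fmt"
	"os/exec"
)

// runStart invokes one minikube binary against a shared profile, mirroring
// version_upgrade_test.go: released binary first, freshly built binary second.
func runStart(binary, profile string) error {
	cmd := exec.Command(binary, "start", "-p", profile,
		"--memory=3072", "--driver=docker", "--container-runtime=containerd")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s:\n%s\n", binary, out)
	return err
}

func main() {
	const profile = "running-upgrade-571323"
	for _, bin := range []string{
		"/tmp/minikube-v1.35.0.3803919505", // released v1.35.0 binary
		"out/minikube-linux-arm64",         // binary under test
	} {
		if err := runStart(bin, profile); err != nil {
			panic(err)
		}
	}
}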

TestMissingContainerUpgrade (123.32s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3948401295 start -p missing-upgrade-937646 --memory=3072 --driver=docker  --container-runtime=containerd
E1210 07:20:14.423992  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:20:24.250592  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3948401295 start -p missing-upgrade-937646 --memory=3072 --driver=docker  --container-runtime=containerd: (59.184909207s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-937646
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-937646
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-937646 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-937646 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (59.815965959s)
helpers_test.go:176: Cleaning up "missing-upgrade-937646" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-937646
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-937646: (2.398596696s)
--- PASS: TestMissingContainerUpgrade (123.32s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-982364 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-982364 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (105.648468ms)

-- stdout --
	* [NoKubernetes-982364] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
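This subtest passes because exit status 14 is exactly what it expects: minikube classifies contradictory flags as MK_USAGE, and --kubernetes-version is meaningless when --no-kubernetes is set. A standalone sketch of that kind of mutual-exclusion check (minikube itself wires this through cobra; the flag handling and exit constant here are assumptions for illustration):

package main

import (
	"flag"
	"fmt"
	"os"
)

const exitUsage = 14 // assumed to correspond to the MK_USAGE status above

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()
	// The two flags contradict each other, so reject the combination up front.
	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(exitUsage)
	}
	fmt.Println("flags OK")
}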

TestNoKubernetes/serial/StartWithK8s (46.66s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-982364 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-982364 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (46.112994988s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-982364 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (46.66s)

TestNoKubernetes/serial/StartWithStopK8s (24.2s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-982364 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-982364 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (21.867486373s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-982364 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-982364 status -o json: exit status 2 (299.069702ms)

-- stdout --
	{"Name":"NoKubernetes-982364","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-982364
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-982364: (2.034971266s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (24.20s)

TestNoKubernetes/serial/Start (7.07s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-982364 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-982364 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.066325243s)
--- PASS: TestNoKubernetes/serial/Start (7.07s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
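The check above asserts that the preceding --no-kubernetes starts downloaded no Kubernetes components: with Kubernetes disabled, the cache version directory is the sentinel v0.0.0, and it must not accumulate binaries such as kubelet or kubeadm. A sketch of that assertion (path from the log; treating an absent directory as success is an assumption):

package main

import (
	"fmt"
	"os"
)

func main() {
	dir := "/home/jenkins/minikube-integration/22089-784887/.minikube/cache/linux/arm64/v0.0.0"
	entries, err := os.ReadDir(dir)
	if os.IsNotExist(err) {
		fmt.Println("no v0.0.0 cache directory: nothing was downloaded")
		return
	}
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		fmt.Println("unexpected cached file:", e.Name()) // any hit fails the test
	}
}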

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-982364 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-982364 "sudo systemctl is-active --quiet service kubelet": exit status 1 (268.494368ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
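The non-zero exit here is the passing condition: systemctl is-active exits 0 only for an active unit and 3 for an inactive one, so "Process exited with status 3" proves kubelet is not running inside the node. A local sketch of the same check with os/exec (the real test runs it through minikube ssh):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active: the no-kubernetes test would fail")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 3:
		fmt.Println("kubelet is inactive, as the test expects")
	default:
		fmt.Println("could not determine unit state:", err)
	}
}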

TestNoKubernetes/serial/ProfileList (0.72s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.72s)

TestNoKubernetes/serial/Stop (1.39s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-982364
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-982364: (1.393328213s)
--- PASS: TestNoKubernetes/serial/Stop (1.39s)

TestNoKubernetes/serial/StartNoArgs (7.16s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-982364 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-982364 --driver=docker  --container-runtime=containerd: (7.159184079s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.16s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-982364 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-982364 "sudo systemctl is-active --quiet service kubelet": exit status 1 (278.420145ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestStoppedBinaryUpgrade/Setup (1.22s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.22s)

TestStoppedBinaryUpgrade/Upgrade (308.4s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1873057357 start -p stopped-upgrade-812950 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1873057357 start -p stopped-upgrade-812950 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (38.82794091s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1873057357 -p stopped-upgrade-812950 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1873057357 -p stopped-upgrade-812950 stop: (1.238271844s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-812950 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1210 07:25:14.423693  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:25:24.250639  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:26:37.492979  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-812950 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m28.334207341s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (308.40s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.09s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-812950
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-812950: (2.086898313s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.09s)

TestPause/serial/Start (50.65s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-966225 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-966225 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (50.649865963s)
--- PASS: TestPause/serial/Start (50.65s)

TestPause/serial/SecondStartNoReconfiguration (6.22s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-966225 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-966225 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.209274255s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.22s)

TestPause/serial/Pause (0.74s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-966225 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

TestPause/serial/VerifyStatus (0.33s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-966225 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-966225 --output=json --layout=cluster: exit status 2 (324.952441ms)

-- stdout --
	{"Name":"pause-966225","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-966225","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
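The exit status 2 above is expected: status reports a non-zero code whenever the cluster is not plainly running, and the --layout=cluster payload reuses HTTP-flavored codes (200 OK, 405 Stopped, 418 Paused, 507 InsufficientStorage, 500 Error), which is why a paused apiserver shows 418. A sketch that decodes just the fields this test family asserts on; the struct is an illustrative subset, not minikube's full schema:

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus covers only the top-level fields checked here.
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

func main() {
	raw := `{"Name":"pause-966225","StatusCode":418,"StatusName":"Paused"}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s => %d (%s)\n", st.Name, st.StatusCode, st.StatusName) // pause-966225 => 418 (Paused)
}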

TestPause/serial/Unpause (0.63s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-966225 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.63s)

TestPause/serial/PauseAgain (0.87s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-966225 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.87s)

TestPause/serial/DeletePaused (2.82s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-966225 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-966225 --alsologtostderr -v=5: (2.819320823s)
--- PASS: TestPause/serial/DeletePaused (2.82s)

TestPause/serial/VerifyDeletedResources (0.38s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-966225
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-966225: exit status 1 (18.233329ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-966225: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.38s)

TestNetworkPlugins/group/false (3.66s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-945825 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-945825 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (201.747694ms)

-- stdout --
	* [false-945825] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1210 07:34:16.264705 1028709 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:34:16.264905 1028709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:34:16.264917 1028709 out.go:374] Setting ErrFile to fd 2...
	I1210 07:34:16.264922 1028709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:34:16.265199 1028709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
	I1210 07:34:16.265642 1028709 out.go:368] Setting JSON to false
	I1210 07:34:16.266562 1028709 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22581,"bootTime":1765329476,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1210 07:34:16.266628 1028709 start.go:143] virtualization:  
	I1210 07:34:16.270337 1028709 out.go:179] * [false-945825] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:34:16.273435 1028709 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:34:16.273501 1028709 notify.go:221] Checking for updates...
	I1210 07:34:16.279245 1028709 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:34:16.282173 1028709 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
	I1210 07:34:16.285181 1028709 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
	I1210 07:34:16.288046 1028709 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:34:16.290886 1028709 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:34:16.295404 1028709 config.go:182] Loaded profile config "kubernetes-upgrade-006690": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1210 07:34:16.295583 1028709 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:34:16.328024 1028709 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:34:16.328170 1028709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:34:16.394879 1028709 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:34:16.379400186 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:34:16.395016 1028709 docker.go:319] overlay module found
	I1210 07:34:16.398160 1028709 out.go:179] * Using the docker driver based on user configuration
	I1210 07:34:16.400972 1028709 start.go:309] selected driver: docker
	I1210 07:34:16.401000 1028709 start.go:927] validating driver "docker" against <nil>
	I1210 07:34:16.401013 1028709 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:34:16.404551 1028709 out.go:203] 
	W1210 07:34:16.407602 1028709 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1210 07:34:16.410591 1028709 out.go:203] 

** /stderr **
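Exit status 14 (MK_USAGE) is again the intended outcome: containerd has no built-in pod network, so minikube refuses --cni=false with any runtime other than docker, and the debugLogs that follow fail to find a "false-945825" context precisely because no cluster was ever created ("pass: true" in the banner below). A sketch of the validation idea, with the condition and message taken from the stderr above and the surrounding structure assumed:

package main

import (
	"fmt"
	"os"
)

// validateCNI mirrors the check behind `The "containerd" container runtime
// requires CNI`: only the docker runtime can run without a CNI plugin.
func validateCNI(runtime, cni string) error {
	if cni == "false" && runtime != "docker" {
		return fmt.Errorf("The %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("containerd", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14)
	}
}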
net_test.go:88: 
----------------------- debugLogs start: false-945825 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-945825

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-945825

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-945825

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-945825

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-945825

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-945825

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-945825

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-945825

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-945825

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-945825

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-945825

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-945825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-945825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-945825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-945825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-945825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-945825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-945825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-945825" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-945825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-945825" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-945825" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt
extensions:
- extension:
last-update: Wed, 10 Dec 2025 07:22:34 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: cluster_info
server: https://192.168.76.2:8443
name: kubernetes-upgrade-006690
contexts:
- context:
cluster: kubernetes-upgrade-006690
user: kubernetes-upgrade-006690
name: kubernetes-upgrade-006690
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-006690
user:
client-certificate: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kubernetes-upgrade-006690/client.crt
client-key: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kubernetes-upgrade-006690/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-945825

>>> host: docker daemon status:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

>>> host: docker daemon config:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

>>> host: /etc/docker/daemon.json:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

>>> host: docker system info:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

>>> host: cri-docker daemon status:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

>>> host: cri-docker daemon config:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

>>> host: cri-dockerd version:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

>>> host: containerd daemon status:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

>>> host: containerd daemon config:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

>>> host: /etc/containerd/config.toml:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

>>> host: containerd config dump:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

>>> host: crio daemon status:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

>>> host: crio daemon config:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

>>> host: /etc/crio:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"

>>> host: crio config:
* Profile "false-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-945825"
----------------------- debugLogs end: false-945825 [took: 3.290121463s] --------------------------------
helpers_test.go:176: Cleaning up "false-945825" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-945825
--- PASS: TestNetworkPlugins/group/false (3.66s)
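The debugLogs dump above is expected: the "false" network-plugin profile apparently never gets a running cluster, so every host check reports a missing profile before cleanup deletes it. For reference, the recovery steps the log itself suggests, as a sketch that assumes the same driver and runtime flags used by the other starts in this job:

# list the profiles minikube currently knows about
out/minikube-linux-arm64 profile list
# recreate the profile only if it is actually wanted (flags assumed from this job's other starts)
out/minikube-linux-arm64 start -p false-945825 --driver=docker --container-runtime=containerd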

TestStartStop/group/old-k8s-version/serial/FirstStart (56.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-166796 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-166796 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (56.700655556s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (56.70s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-166796 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [8c0bd097-52a1-4e4a-97e6-a340f698605e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [8c0bd097-52a1-4e4a-97e6-a340f698605e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003650111s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-166796 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)
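For context, this step waits up to 8m0s for pods labeled integration-test=busybox and then execs `ulimit -n` inside the pod. A minimal sketch of an equivalent manifest, assuming a plain sleeping pod (the real testdata/busybox.yaml may differ; the image tag is borrowed from the VerifyKubernetesImages output later in this report):

kubectl --context old-k8s-version-166796 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]
EOF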

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-166796 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-166796 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.018128412s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-166796 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)
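The enable command above overrides both the addon image and its registry. A sketch for checking what the deployment actually ended up running (the joined image string is an assumption about how minikube combines --registries and --images):

kubectl --context old-k8s-version-166796 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
# expected to print something like fake.domain/registry.k8s.io/echoserver:1.4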

TestStartStop/group/old-k8s-version/serial/Stop (12.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-166796 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-166796 --alsologtostderr -v=3: (12.147597702s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.15s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-166796 -n old-k8s-version-166796
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-166796 -n old-k8s-version-166796: exit status 7 (79.044541ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-166796 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
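The status check above relies on minikube's exit-code convention: in this log, exit status 7 with Host=Stopped just means the profile exists but is not running, which the test treats as acceptable. A small scripting sketch:

# a non-zero exit is expected while the cluster is stopped; 7 is the code seen above
out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-166796 || echo "status exit code: $?"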

TestStartStop/group/old-k8s-version/serial/SecondStart (56.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-166796 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1210 07:37:35.782754  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-166796 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (55.63025981s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-166796 -n old-k8s-version-166796
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (56.04s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-nn7rr" [3cb45e6c-9560-4cd4-af71-18976a8e507f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003399479s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-nn7rr" [3cb45e6c-9560-4cd4-af71-18976a8e507f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003723604s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-166796 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-166796 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
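The image verification parses `image list --format=json`. A sketch for reproducing the filter by hand, assuming jq is available and that each JSON entry carries a repoTags array:

# print every tag in the profile, then drop the stock kubernetes images
out/minikube-linux-arm64 -p old-k8s-version-166796 image list --format=json | jq -r '.[].repoTags[]?' | grep -v 'registry.k8s.io/'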

TestStartStop/group/old-k8s-version/serial/Pause (3.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-166796 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-166796 -n old-k8s-version-166796
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-166796 -n old-k8s-version-166796: exit status 2 (363.793166ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-166796 -n old-k8s-version-166796
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-166796 -n old-k8s-version-166796: exit status 2 (338.331506ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-166796 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-166796 -n old-k8s-version-166796
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-166796 -n old-k8s-version-166796
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.19s)
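The pause sequence is verified entirely through status templates: after pause, {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped, each with exit status 2. The same round trip condensed into a sketch (combining both fields in one Go template is an assumption, though --format takes arbitrary templates over the status struct):

out/minikube-linux-arm64 pause -p old-k8s-version-166796
out/minikube-linux-arm64 status -p old-k8s-version-166796 --format='{{.APIServer}}/{{.Kubelet}}' || true  # Paused/Stopped while paused
out/minikube-linux-arm64 unpause -p old-k8s-version-166796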

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-444518 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-444518 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (56.92531363s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.93s)
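This profile pins the API server to port 8444 via --apiserver-port. A quick sketch for confirming the kubeconfig entry picked it up:

# the cluster server URL for this context should end in :8444
kubectl config view --minify --context=default-k8s-diff-port-444518 -o jsonpath='{.clusters[0].cluster.server}'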

TestStartStop/group/embed-certs/serial/FirstStart (53.62s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-254586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-254586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (53.6166917s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (53.62s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-444518 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [435c6c05-7afb-41be-9dad-a1ec31827089] Pending
helpers_test.go:353: "busybox" [435c6c05-7afb-41be-9dad-a1ec31827089] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [435c6c05-7afb-41be-9dad-a1ec31827089] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003795212s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-444518 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.47s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-444518 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-444518 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.167666935s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-444518 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.29s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-444518 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-444518 --alsologtostderr -v=3: (12.323610799s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.32s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-444518 -n default-k8s-diff-port-444518
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-444518 -n default-k8s-diff-port-444518: exit status 7 (73.602672ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-444518 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (59.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-444518 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-444518 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (59.080167195s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-444518 -n default-k8s-diff-port-444518
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (59.52s)

TestStartStop/group/embed-certs/serial/DeployApp (9.52s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-254586 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [7d8fb77a-2110-40a9-8bd0-2aa43990bb46] Pending
helpers_test.go:353: "busybox" [7d8fb77a-2110-40a9-8bd0-2aa43990bb46] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [7d8fb77a-2110-40a9-8bd0-2aa43990bb46] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00336632s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-254586 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.52s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.35s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-254586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-254586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.190910277s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-254586 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.35s)

TestStartStop/group/embed-certs/serial/Stop (12.69s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-254586 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-254586 --alsologtostderr -v=3: (12.685159743s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.69s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-254586 -n embed-certs-254586
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-254586 -n embed-certs-254586: exit status 7 (81.394529ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-254586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (50.32s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-254586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1210 07:40:14.424085  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:40:24.250404  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-254586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (49.84890287s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-254586 -n embed-certs-254586
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.32s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-6lw2c" [4d1987de-2590-40c2-b96e-631abf239d8a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003059696s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-6lw2c" [4d1987de-2590-40c2-b96e-631abf239d8a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003782467s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-444518 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-444518 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-444518 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-444518 -n default-k8s-diff-port-444518
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-444518 -n default-k8s-diff-port-444518: exit status 2 (327.21199ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-444518 -n default-k8s-diff-port-444518
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-444518 -n default-k8s-diff-port-444518: exit status 2 (350.602035ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-444518 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-444518 -n default-k8s-diff-port-444518
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-444518 -n default-k8s-diff-port-444518
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.09s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-7xmb2" [8588d6eb-3e14-4544-9522-29c8b1d5a0c0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003952552s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-7xmb2" [8588d6eb-3e14-4544-9522-29c8b1d5a0c0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00334902s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-254586 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-254586 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/embed-certs/serial/Pause (4.92s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-254586 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-254586 --alsologtostderr -v=1: (1.454289626s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-254586 -n embed-certs-254586
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-254586 -n embed-certs-254586: exit status 2 (649.60818ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-254586 -n embed-certs-254586
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-254586 -n embed-certs-254586: exit status 2 (511.029752ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-254586 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-254586 -n embed-certs-254586
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-254586 -n embed-certs-254586
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.92s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/no-preload/serial/Stop (1.3s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-587009 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-587009 --alsologtostderr -v=3: (1.303436665s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.30s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-587009 -n no-preload-587009
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-587009 -n no-preload-587009: exit status 7 (71.828013ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-587009 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/Stop (1.32s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-237317 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-237317 --alsologtostderr -v=3: (1.317283178s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.32s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-237317 -n newest-cni-237317
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-237317 -n newest-cni-237317: exit status 7 (70.951972ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-237317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-237317 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestNetworkPlugins/group/auto/Start (80.52s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-945825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-945825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m20.523015265s)
--- PASS: TestNetworkPlugins/group/auto/Start (80.52s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-945825 "pgrep -a kubelet"
I1210 07:59:03.687139  786751 config.go:182] Loaded profile config "auto-945825": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)
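KubeletFlags works by dumping the kubelet command line over SSH, after which individual flags can be asserted. A sketch (the flag picked here is only an example of what the output typically contains on a containerd cluster):

out/minikube-linux-arm64 ssh -p auto-945825 "pgrep -a kubelet" | grep -o -- '--container-runtime-endpoint=[^ ]*'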

TestNetworkPlugins/group/auto/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-945825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-pj4pz" [001f00b9-004b-462f-8c86-907fc68259e9] Pending
helpers_test.go:353: "netcat-cd4db9dbf-pj4pz" [001f00b9-004b-462f-8c86-907fc68259e9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003333087s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.27s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-945825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-945825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-945825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
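Taken together, the last three checks probe DNS resolution inside the pod, plain localhost connectivity, and hairpin traffic, i.e. the pod reaching itself through its own service name. The same probes can be run by hand against this profile (commands verbatim from the log above):

kubectl --context auto-945825 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context auto-945825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# hairpin: the service name "netcat" resolves back to the pod itself
kubectl --context auto-945825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"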

TestNetworkPlugins/group/flannel/Start (64.41s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-945825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-945825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m4.409221194s)
--- PASS: TestNetworkPlugins/group/flannel/Start (64.41s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-j5j64" [db0a531d-a8d6-42ee-ac69-9147c2e79dfd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003693576s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)
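The controller-pod wait can be reproduced directly with a label selector; namespace and label are taken from the log lines above:

kubectl --context flannel-945825 -n kube-flannel get pods -l app=flannel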

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-945825 "pgrep -a kubelet"
I1210 08:00:46.551048  786751 config.go:182] Loaded profile config "flannel-945825": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-945825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-2brff" [508761fc-8d06-48fc-9203-1588fbfabc57] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-2brff" [508761fc-8d06-48fc-9203-1588fbfabc57] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003620475s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-945825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-945825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-945825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/calico/Start (57.01s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-945825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-945825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (57.006273445s)
--- PASS: TestNetworkPlugins/group/calico/Start (57.01s)

TestNetworkPlugins/group/calico/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-fc8m7" [2501413c-8ccd-4402-8591-ed4c04fea2d9] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-fc8m7" [2501413c-8ccd-4402-8591-ed4c04fea2d9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003409109s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.00s)

TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-945825 "pgrep -a kubelet"
I1210 08:02:21.828917  786751 config.go:182] Loaded profile config "calico-945825": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

TestNetworkPlugins/group/calico/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-945825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-5crzt" [60dd7e18-af8c-4a0d-8c76-5eb3554f8e90] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-5crzt" [60dd7e18-af8c-4a0d-8c76-5eb3554f8e90] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004592968s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.26s)

TestNetworkPlugins/group/calico/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-945825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-945825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-945825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/Start (59.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-945825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-945825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (59.101402984s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-945825 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-945825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-nblxj" [e07b6fae-b2b5-4590-a275-a4ff444ad42a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-nblxj" [e07b6fae-b2b5-4590-a275-a4ff444ad42a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004040574s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-945825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-945825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-945825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (82.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-945825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1210 08:04:24.436917  786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/auto-945825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-945825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m22.989454666s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (82.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-nw24j" [630b0c33-78c4-4040-871c-a94d16497f7a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003784531s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-945825 "pgrep -a kubelet"
I1210 08:05:53.718158  786751 config.go:182] Loaded profile config "kindnet-945825": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-945825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-r9sk5" [edcfaefb-4c32-40ff-8ac9-680a73fb2b0b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-r9sk5" [edcfaefb-4c32-40ff-8ac9-680a73fb2b0b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003124047s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-945825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-945825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-945825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (74.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-945825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-945825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m14.231194762s)
--- PASS: TestNetworkPlugins/group/bridge/Start (74.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-945825 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-945825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-cbgtr" [94d1916d-367a-4ee2-b613-f028dc84f36b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-cbgtr" [94d1916d-367a-4ee2-b613-f028dc84f36b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.00410847s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-945825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-945825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-945825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (71.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-945825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-945825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m11.175656886s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.18s)
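The --enable-default-cni path does not deploy a CNI DaemonSet; it has minikube write a basic bridge CNI config onto the node. One way to inspect it, plus an illustrative shape of such a config (the file name and field values are assumptions, not the exact file minikube writes):

# /etc/cni/net.d is the standard CNI config directory probed later in this report.
out/minikube-linux-arm64 ssh -p enable-default-cni-945825 "ls /etc/cni/net.d/ && sudo cat /etc/cni/net.d/*"
# A minimal bridge config has roughly this shape (illustrative values):
#   { "cniVersion": "0.3.1", "name": "bridge", "type": "bridge",
#     "bridge": "bridge0", "isGateway": true, "ipMasq": true,
#     "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } }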

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-945825 "pgrep -a kubelet"
I1210 08:09:22.526150  786751 config.go:182] Loaded profile config "enable-default-cni-945825": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-945825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-9gpjg" [b7d81aa7-74ef-4714-bc67-ccd2db70e329] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-9gpjg" [b7d81aa7-74ef-4714-bc67-ccd2db70e329] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004587062s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.27s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-945825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-945825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-945825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

Test skip (38/417)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.42
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
151 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
152 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
153 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
379 TestStartStop/group/disable-driver-mounts 0.18
392 TestNetworkPlugins/group/kubenet 3.63
400 TestNetworkPlugins/group/cilium 3.96
TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.42s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-927891 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-927891" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-927891
--- SKIP: TestDownloadOnlyKic (0.42s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-262664" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-262664
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (3.63s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-945825 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-945825

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-945825

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-945825

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-945825

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-945825

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-945825

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-945825

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-945825

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-945825

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-945825

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> host: /etc/hosts:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> host: /etc/resolv.conf:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-945825

>>> host: crictl pods:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> host: crictl containers:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> k8s: describe netcat deployment:
error: context "kubenet-945825" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-945825" does not exist

>>> k8s: netcat logs:
error: context "kubenet-945825" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-945825" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-945825" does not exist

>>> k8s: coredns logs:
error: context "kubenet-945825" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-945825" does not exist

>>> k8s: api server logs:
error: context "kubenet-945825" does not exist

>>> host: /etc/cni:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> host: ip a s:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> host: ip r s:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> host: iptables-save:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> host: iptables table nat:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-945825" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-945825" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-945825" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> host: kubelet daemon config:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> k8s: kubelet logs:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 07:22:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-006690
contexts:
- context:
    cluster: kubernetes-upgrade-006690
    user: kubernetes-upgrade-006690
  name: kubernetes-upgrade-006690
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-006690
  user:
    client-certificate: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kubernetes-upgrade-006690/client.crt
    client-key: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kubernetes-upgrade-006690/client.key
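This config explains every failure above: the only kubeconfig entry is kubernetes-upgrade-006690 and current-context is empty, so there is no kubenet-945825 context to resolve. A quick way to confirm (hypothetical follow-up commands, not part of the test run):

kubectl config get-contexts          # lists only kubernetes-upgrade-006690
kubectl --context kubenet-945825 get pods
# error: context "kubenet-945825" does not exist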

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-945825

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> host: cri-docker daemon config:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> host: cri-dockerd version:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> host: containerd daemon status:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> host: containerd daemon config:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> host: containerd config dump:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> host: crio daemon status:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> host: crio daemon config:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> host: /etc/crio:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

>>> host: crio config:
* Profile "kubenet-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-945825"

----------------------- debugLogs end: kubenet-945825 [took: 3.450877257s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-945825" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-945825
--- SKIP: TestNetworkPlugins/group/kubenet (3.63s)
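
Note: every ">>> host: ..." probe in the block above failed identically because the
kubenet-945825 profile was never started (the test is skipped), so there was no node
for debugLogs to query. A quick way to verify which profiles actually exist on the
runner (a minimal sketch using the same binary the suite invokes; the profile name is
taken from the log above):

    $ out/minikube-linux-arm64 profile list
    $ out/minikube-linux-arm64 status -p kubenet-945825

"minikube status" exits non-zero for a missing profile, matching the
'* Profile "kubenet-945825" not found.' lines in this section.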

TestNetworkPlugins/group/cilium (3.96s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-945825 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-945825

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-945825

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-945825

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-945825

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-945825

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-945825

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-945825

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-945825

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-945825

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-945825

>>> host: /etc/nsswitch.conf:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: /etc/hosts:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: /etc/resolv.conf:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-945825
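
Note: the ">>> netcat: ..." entries above are in-pod DNS probes: they resolve
kubernetes.default through the cluster DNS service IP (10.96.0.10) over both udp/53
and tcp/53. With a live cluster they would run inside the suite's netcat deployment,
roughly like this (a sketch only; the context below was never created, which is
exactly why each probe short-circuits with the configuration error instead):

    $ kubectl --context cilium-945825 exec deploy/netcat -- nslookup kubernetes.default
    $ kubectl --context cilium-945825 exec deploy/netcat -- dig @10.96.0.10 kubernetes.default.svc.cluster.local +tcp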

>>> host: crictl pods:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: crictl containers:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> k8s: describe netcat deployment:
error: context "cilium-945825" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-945825" does not exist

>>> k8s: netcat logs:
error: context "cilium-945825" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-945825" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-945825" does not exist

>>> k8s: coredns logs:
error: context "cilium-945825" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-945825" does not exist

>>> k8s: api server logs:
error: context "cilium-945825" does not exist

>>> host: /etc/cni:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: ip a s:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: ip r s:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: iptables-save:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: iptables table nat:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-945825

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-945825

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-945825" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-945825" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-945825

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-945825

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-945825" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-945825" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-945825" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-945825" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-945825" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: kubelet daemon config:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> k8s: kubelet logs:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 07:22:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-006690
contexts:
- context:
    cluster: kubernetes-upgrade-006690
    user: kubernetes-upgrade-006690
  name: kubernetes-upgrade-006690
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-006690
  user:
    client-certificate: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kubernetes-upgrade-006690/client.crt
    client-key: /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/kubernetes-upgrade-006690/client.key
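
Note: the kubeconfig dump above explains every kubectl failure in this section: the
only entry left is kubernetes-upgrade-006690 and current-context is empty, so
resolving the cilium-945825 context fails before any API call is made. A minimal
check on the same host (assuming the default kubeconfig path):

    $ kubectl config get-contexts
    $ kubectl config current-context
    $ kubectl --context cilium-945825 get nodes

The last command reproduces the error: context "cilium-945825" does not exist
message; the "Error in configuration: context was not found" variant appears to be
the same missing-context condition reported from a different kubectl code path.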

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-945825

>>> host: docker daemon status:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: docker daemon config:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: docker system info:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: cri-docker daemon status:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: cri-docker daemon config:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: cri-dockerd version:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: containerd daemon status:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: containerd daemon config:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: containerd config dump:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: crio daemon status:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: crio daemon config:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: /etc/crio:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

>>> host: crio config:
* Profile "cilium-945825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-945825"

----------------------- debugLogs end: cilium-945825 [took: 3.804194471s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-945825" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-945825
--- SKIP: TestNetworkPlugins/group/cilium (3.96s)
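
Note: the "minikube delete -p" run above is the standard cleanup for a leftover
profile. If a job aborts before helpers_test.go reaches it, the equivalent manual
cleanup is (a sketch; --all removes every profile on the runner, so prefer the -p
form when other jobs share the machine):

    $ out/minikube-linux-arm64 delete -p cilium-945825
    $ out/minikube-linux-arm64 delete --all --purge

Here --purge additionally deletes the ~/.minikube directory, which also clears cached
images and certificates.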